diff --git a/v1.22/gardener-alicloud/PRODUCT.yaml b/v1.22/gardener-alicloud/PRODUCT.yaml
new file mode 100644
index 0000000000..2603d5fee5
--- /dev/null
+++ b/v1.22/gardener-alicloud/PRODUCT.yaml
@@ -0,0 +1,9 @@
+vendor: SAP
+name: Gardener (https://github.com/gardener/gardener) shoot cluster deployed on ALICLOUD
+version: v1.34.0
+website_url: https://gardener.cloud
+repo_url: https://github.com/gardener/
+documentation_url: https://github.com/gardener/documentation/wiki
+product_logo_url: https://raw.githubusercontent.com/gardener/documentation/master/images/logo_w_saplogo.svg
+type: installer
+description: Gardener implements the automated management and operation of Kubernetes clusters as a service and aims to support that service on multiple cloud providers.
\ No newline at end of file
diff --git a/v1.22/gardener-alicloud/README.md b/v1.22/gardener-alicloud/README.md
new file mode 100644
index 0000000000..647dbcb2f7
--- /dev/null
+++ b/v1.22/gardener-alicloud/README.md
@@ -0,0 +1,80 @@
+# Reproducing the test results:
+
+## Install Gardener on your Kubernetes Landscape
+Check out https://github.com/gardener/garden-setup for more detailed instructions and additional information. To install Gardener in your base cluster, the command line tool [sow](https://github.com/gardener/sow) is used. Use the provided Docker image, which already contains `sow` and all required tools. To execute `sow`, you call a [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) which starts `sow` in a Docker container (Docker will download the image from [eu.gcr.io/gardener-project/sow](http://eu.gcr.io/gardener-project/sow) if it is not available locally yet). Docker executes the `sow` command with the given arguments and mounts parts of your file system into that container so that `sow` can read the configuration files for the installation of the Gardener components and persist the state of your installation. After `sow`'s execution, Docker removes the container again.
+
+1. Clone the `sow` repository and add the path to our [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) to your `PATH` variable so you can call `sow` on the command line.
+
+   ```bash
+   # setup for calling sow via the wrapper
+   git clone "https://github.com/gardener/sow"
+   cd sow
+   export PATH=$PATH:$PWD/docker/bin
+   ```
+
+2. Create a directory `landscape` for your Gardener landscape and clone this repository into a subdirectory called `crop`:
+
+   ```bash
+   cd ..
+   mkdir landscape
+   cd landscape
+   git clone "https://github.com/gardener/garden-setup" crop
+   ```
+
+3. If you don't have your `kubeconfig` stored locally somewhere yet, download it. For example, for GKE you would use the following command (replace the placeholders with your cluster name, zone, and project):
+
+   ```bash
+   gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
+   ```
+
+4. Save your `kubeconfig` somewhere in your `landscape` directory. For the remaining steps, we will assume that you saved it using the file path `landscape/kubeconfig`.
+
+5. In your `landscape` directory, create a configuration file called `acre.yaml`. The structure of the configuration file is described in [configuration file acre.yaml](https://github.com/gardener/garden-setup#configuration-file-acreyaml). Note that the relative file path `./kubeconfig` must be specified in the field `landscape.cluster.kubeconfig` of the configuration file; a minimal sketch is shown below.
+
+   > Do not use file `acre.yaml` in directory `crop`. This file is used internally by the installation tool.
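+   For orientation, the following is a minimal, illustrative sketch of what an `acre.yaml` can look like. Only the `landscape.cluster.kubeconfig` value is prescribed by the steps above; every other field value here is a placeholder, and a complete file needs additional sections (for example, cloud provider credentials), so consult the garden-setup documentation linked above for the authoritative schema.
+
+   ```yaml
+   # Illustrative sketch only -- all values are placeholders, and a real
+   # acre.yaml requires further sections; see the garden-setup docs.
+   landscape:
+     name: my-gardener                # placeholder: a name for this landscape
+     domain: gardener.example.com     # placeholder: a DNS domain you control
+     cluster:
+       kubeconfig: ./kubeconfig       # relative path to the kubeconfig from step 4
+   ```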
+6. If you created the base cluster using GKE, convert your `kubeconfig` file to one that uses basic authentication with Google-specific configuration parameters:
+
+   ```bash
+   sow convertkubeconfig
+   ```
+
+   When asked for credentials, enter the ones that the GKE dashboard shows when clicking on `show credentials`.
+
+   `sow` will replace the file specified in `landscape.cluster.kubeconfig` of your `acre.yaml` file with a kubeconfig file that uses basic authentication.
+
+7. In your first terminal window, use the following command to check in which order the components will be installed. Nothing will be deployed yet, and this also lets you verify that the syntax of your `acre.yaml` is correct:
+
+   ```bash
+   sow order -A
+   ```
+
+8. If there are no error messages, use the following command to deploy Gardener on your base cluster:
+
+   ```bash
+   sow deploy -A
+   ```
+
+9. `sow` now starts to install Gardener in your base cluster. The installation can take about 30 minutes. `sow` prints status messages to the terminal window so that you can follow the progress of the installation. The other terminal window will show the newly created Kubernetes resources after a while and whether their deployment was successful. Wait until the last component is deployed and all created Kubernetes resources have the status `Running`.
+
+10. Use the following command to find out the URL of the Gardener dashboard:
+
+    ```bash
+    sow url
+    ```
+
+
+## Create Kubernetes Cluster
+
+Log in to the SAP Gardener Dashboard to create a Kubernetes cluster on Amazon Web Services, Microsoft Azure, Google Cloud Platform, Alibaba Cloud, or OpenStack (this submission uses a cluster on Alibaba Cloud).
+
+## Launch E2E Conformance Tests
+Set `KUBECONFIG` to the path of the kubeconfig file of your newly created cluster (you can find the kubeconfig e.g. in the Gardener dashboard). Follow the instructions below to run the Kubernetes e2e conformance tests. Adjust the values of the arguments `k8sVersion` and `cloudprovider` to match your new cluster (for this Alibaba Cloud v1.22 submission: `--k8sVersion=1.22.2 --cloudprovider=alicloud`).
+
+```bash
+# first set KUBECONFIG to your cluster
+docker run -ti --rm -v $KUBECONFIG:/mye2e/shoot.config golang:1.13 bash
+# run all commands below within the container
+go get github.com/gardener/test-infra; cd /go/src/github.com/gardener/test-infra
+export GO111MODULE=on; export E2E_EXPORT_PATH=/tmp/export; export KUBECONFIG=/mye2e/shoot.config; export GINKGO_PARALLEL=false
+go run -mod=vendor ./integration-tests/e2e --k8sVersion=1.22.2 --cloudprovider=alicloud --testcasegroup="conformance"
+```
\ No newline at end of file
diff --git a/v1.22/gardener-alicloud/e2e.log b/v1.22/gardener-alicloud/e2e.log
new file mode 100644
index 0000000000..4340a7bde9
--- /dev/null
+++ b/v1.22/gardener-alicloud/e2e.log
@@ -0,0 +1,13670 @@
+Conformance test: not doing test setup.
+I1027 14:00:29.189794 5703 e2e.go:129] Starting e2e run "33663709-29f8-4e40-9066-22fcaa6d2004" on Ginkgo node 1 +{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1635343229 - Will randomize all specs +Will run 346 of 6432 specs + +Oct 27 14:00:31.879: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:00:31.881: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Oct 27 14:00:31.907: INFO: Waiting up to 10m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready +Oct 27 14:00:31.965: INFO: 24 / 24 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Oct 27 14:00:31.965: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready. +Oct 27 14:00:31.965: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Oct 27 14:00:31.979: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'apiserver-proxy' (0 seconds elapsed) +Oct 27 14:00:31.979: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Oct 27 14:00:31.979: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-disk-plugin-alicloud' (0 seconds elapsed) +Oct 27 14:00:31.979: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Oct 27 14:00:31.979: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed) +Oct 27 14:00:31.979: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed) +Oct 27 14:00:31.979: INFO: e2e test version: v1.22.2 +Oct 27 14:00:31.983: INFO: kube-apiserver version: v1.22.2 +Oct 27 14:00:31.983: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:00:31.989: INFO: Cluster IP family: ipv4 +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:31.989: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +W1027 14:00:32.025462 5703 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:00:32.025: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled +Oct 27 14:00:32.038: INFO: PSP annotation exists on dry run pod: "extensions.gardener.cloud.kube-system.csi-disk-plugin-alicloud"; assuming PodSecurityPolicy is enabled +W1027 14:00:32.042663 5703 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +W1027 14:00:32.047846 5703 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:00:32.061: INFO: Found ClusterRoles; assuming RBAC is enabled. 
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-1646 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test service account token: +Oct 27 14:00:32.195: INFO: Waiting up to 5m0s for pod "test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038" in namespace "svcaccounts-1646" to be "Succeeded or Failed" +Oct 27 14:00:32.200: INFO: Pod "test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.909893ms +Oct 27 14:00:34.205: INFO: Pod "test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010537465s +Oct 27 14:00:36.212: INFO: Pod "test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0172203s +Oct 27 14:00:38.218: INFO: Pod "test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022995861s +STEP: Saw pod success +Oct 27 14:00:38.218: INFO: Pod "test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038" satisfied condition "Succeeded or Failed" +Oct 27 14:00:38.222: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038 container agnhost-container: +STEP: delete the pod +Oct 27 14:00:38.241: INFO: Waiting for pod test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038 to disappear +Oct 27 14:00:38.245: INFO: Pod test-pod-16e9dcbd-4720-4dd5-8ddb-cc39a2ce2038 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:00:38.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-1646" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":1,"skipped":36,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:38.258: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-3599 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:00:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-3599" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":2,"skipped":64,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:43.968: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5662 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:00:44.127: INFO: Waiting up to 5m0s for pod "downward-api-4e1af101-ef16-463f-a500-fecd873b5648" in namespace "downward-api-5662" to be "Succeeded or Failed" +Oct 27 14:00:44.132: INFO: Pod "downward-api-4e1af101-ef16-463f-a500-fecd873b5648": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293573ms +Oct 27 14:00:46.137: INFO: Pod "downward-api-4e1af101-ef16-463f-a500-fecd873b5648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010057771s +STEP: Saw pod success +Oct 27 14:00:46.137: INFO: Pod "downward-api-4e1af101-ef16-463f-a500-fecd873b5648" satisfied condition "Succeeded or Failed" +Oct 27 14:00:46.143: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downward-api-4e1af101-ef16-463f-a500-fecd873b5648 container dapi-container: +STEP: delete the pod +Oct 27 14:00:46.167: INFO: Waiting for pod downward-api-4e1af101-ef16-463f-a500-fecd873b5648 to disappear +Oct 27 14:00:46.171: INFO: Pod downward-api-4e1af101-ef16-463f-a500-fecd873b5648 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:00:46.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5662" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":3,"skipped":72,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:46.185: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6023 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:00:47.272: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:00:50.299: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:00:50.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6023" for this suite. +STEP: Destroying namespace "webhook-6023-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":4,"skipped":89,"failed":0} +SSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:50.605: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3092 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-3092 +STEP: creating service affinity-clusterip-transition in namespace services-3092 +STEP: creating replication controller affinity-clusterip-transition in namespace services-3092 +I1027 14:00:50.765031 5703 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3092, replica count: 3 +I1027 14:00:53.816958 5703 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:00:56.819110 5703 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:00:59.819714 5703 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:00:59.829: INFO: Creating new exec pod +Oct 27 14:01:02.850: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3092 exec execpod-affinity77fhq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Oct 27 14:01:03.147: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Oct 27 14:01:03.147: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:01:03.147: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3092 exec execpod-affinity77fhq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
172.30.201.241 80' +Oct 27 14:01:03.430: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.30.201.241 80\nConnection to 172.30.201.241 80 port [tcp/http] succeeded!\n" +Oct 27 14:01:03.430: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:01:03.442: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3092 exec execpod-affinity77fhq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.201.241:80/ ; done' +Oct 27 14:01:03.788: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n" +Oct 27 14:01:03.788: INFO: stdout: "\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9" +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: 
affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:03.788: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:33.789: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3092 exec execpod-affinity77fhq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.201.241:80/ ; done' +Oct 27 14:01:34.136: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n" +Oct 27 14:01:34.136: INFO: stdout: "\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9" +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.136: INFO: Received response from host: 
affinity-clusterip-transition-55th9 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.136: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.149: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3092 exec execpod-affinity77fhq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.201.241:80/ ; done' +Oct 27 14:01:34.496: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n" +Oct 27 14:01:34.496: INFO: stdout: "\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-7jjj4\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-smhlh\naffinity-clusterip-transition-55th9" +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.496: INFO: Received response from host: 
affinity-clusterip-transition-55th9 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-7jjj4 +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-smhlh +Oct 27 14:01:34.496: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.498: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3092 exec execpod-affinity77fhq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.201.241:80/ ; done' +Oct 27 14:02:04.825: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.201.241:80/\n" +Oct 27 14:02:04.825: INFO: stdout: "\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9\naffinity-clusterip-transition-55th9" +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: 
affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Received response from host: affinity-clusterip-transition-55th9 +Oct 27 14:02:04.825: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3092, will wait for the garbage collector to delete the pods +Oct 27 14:02:04.895: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.765786ms +Oct 27 14:02:04.996: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.420098ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:07.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3092" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":5,"skipped":96,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:07.431: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3837 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396 +STEP: creating an pod +Oct 27 14:02:07.585: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Oct 27 14:02:07.667: INFO: stderr: "" +Oct 27 14:02:07.667: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for log generator to start. 
+Oct 27 14:02:07.667: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Oct 27 14:02:07.667: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3837" to be "running and ready, or succeeded" +Oct 27 14:02:07.671: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342186ms +Oct 27 14:02:09.760: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.093003305s +Oct 27 14:02:09.760: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Oct 27 14:02:09.760: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] +STEP: checking for a matching strings +Oct 27 14:02:09.760: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 logs logs-generator logs-generator' +Oct 27 14:02:09.916: INFO: stderr: "" +Oct 27 14:02:09.916: INFO: stdout: "I1027 14:02:08.396119 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/2df 373\nI1027 14:02:08.596247 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/dhbs 321\nI1027 14:02:08.796597 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/j4qm 507\nI1027 14:02:08.996940 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/nqb 350\nI1027 14:02:09.196208 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/6p2 210\nI1027 14:02:09.396532 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/w924 227\nI1027 14:02:09.596879 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/trv7 300\nI1027 14:02:09.796169 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/5rz 213\n" +STEP: limiting log lines +Oct 27 14:02:09.917: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 logs logs-generator logs-generator --tail=1' +Oct 27 14:02:10.060: INFO: stderr: "" +Oct 27 14:02:10.060: INFO: stdout: "I1027 14:02:09.996512 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/dkn2 489\n" +Oct 27 14:02:10.060: INFO: got output "I1027 14:02:09.996512 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/dkn2 489\n" +STEP: limiting log bytes +Oct 27 14:02:10.060: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 logs logs-generator logs-generator --limit-bytes=1' +Oct 27 14:02:10.144: INFO: stderr: "" +Oct 27 14:02:10.145: INFO: stdout: "I" +Oct 27 14:02:10.145: INFO: got output "I" +STEP: exposing timestamps +Oct 27 14:02:10.145: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 logs logs-generator logs-generator --tail=1 --timestamps' +Oct 27 14:02:10.224: INFO: stderr: "" +Oct 27 14:02:10.224: INFO: stdout: "2021-10-27T14:02:10.197100545Z I1027 14:02:10.196931 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/9gd 280\n" +Oct 27 14:02:10.224: INFO: got output "2021-10-27T14:02:10.197100545Z I1027 14:02:10.196931 1 logs_generator.go:76] 9 POST 
/api/v1/namespaces/ns/pods/9gd 280\n" +STEP: restricting to a time range +Oct 27 14:02:12.726: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 logs logs-generator logs-generator --since=1s' +Oct 27 14:02:12.887: INFO: stderr: "" +Oct 27 14:02:12.887: INFO: stdout: "I1027 14:02:11.996505 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/hk6r 351\nI1027 14:02:12.196858 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/49z 524\nI1027 14:02:12.397188 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/rwq5 341\nI1027 14:02:12.596549 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/fwn 327\nI1027 14:02:12.796879 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/9rg 328\n" +Oct 27 14:02:12.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 logs logs-generator logs-generator --since=24h' +Oct 27 14:02:12.980: INFO: stderr: "" +Oct 27 14:02:12.980: INFO: stdout: "I1027 14:02:08.396119 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/2df 373\nI1027 14:02:08.596247 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/dhbs 321\nI1027 14:02:08.796597 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/j4qm 507\nI1027 14:02:08.996940 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/nqb 350\nI1027 14:02:09.196208 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/6p2 210\nI1027 14:02:09.396532 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/w924 227\nI1027 14:02:09.596879 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/trv7 300\nI1027 14:02:09.796169 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/5rz 213\nI1027 14:02:09.996512 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/dkn2 489\nI1027 14:02:10.196931 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/9gd 280\nI1027 14:02:10.396178 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/4zvs 225\nI1027 14:02:10.596555 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/gxq 405\nI1027 14:02:10.796859 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/9x84 342\nI1027 14:02:10.997203 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/zsz 374\nI1027 14:02:11.196538 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/kqz 500\nI1027 14:02:11.396866 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/dls 421\nI1027 14:02:11.597223 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/8hn 226\nI1027 14:02:11.797143 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/bc8j 398\nI1027 14:02:11.996505 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/hk6r 351\nI1027 14:02:12.196858 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/49z 524\nI1027 14:02:12.397188 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/rwq5 341\nI1027 14:02:12.596549 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/fwn 327\nI1027 14:02:12.796879 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/9rg 328\n" +[AfterEach] Kubectl logs + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401 +Oct 27 14:02:12.981: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3837 delete pod logs-generator' +Oct 27 14:02:14.386: INFO: stderr: "" +Oct 27 14:02:14.386: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:14.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3837" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":6,"skipped":99,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:14.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5239 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-3f813a57-489f-43c1-a828-1197beb76e43 +STEP: Creating a pod to test consume configMaps +Oct 27 14:02:14.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2293af46-2165-4fe2-b18e-0f908dd89f8d" in namespace "projected-5239" to be "Succeeded or Failed" +Oct 27 14:02:14.572: INFO: Pod "pod-projected-configmaps-2293af46-2165-4fe2-b18e-0f908dd89f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.906677ms +Oct 27 14:02:16.578: INFO: Pod "pod-projected-configmaps-2293af46-2165-4fe2-b18e-0f908dd89f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010730681s +STEP: Saw pod success +Oct 27 14:02:16.578: INFO: Pod "pod-projected-configmaps-2293af46-2165-4fe2-b18e-0f908dd89f8d" satisfied condition "Succeeded or Failed" +Oct 27 14:02:16.583: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-configmaps-2293af46-2165-4fe2-b18e-0f908dd89f8d container agnhost-container: +STEP: delete the pod +Oct 27 14:02:16.601: INFO: Waiting for pod pod-projected-configmaps-2293af46-2165-4fe2-b18e-0f908dd89f8d to disappear +Oct 27 14:02:16.605: INFO: Pod pod-projected-configmaps-2293af46-2165-4fe2-b18e-0f908dd89f8d no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:16.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5239" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":7,"skipped":110,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:16.618: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8344 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 14:02:16.781: INFO: Waiting up to 5m0s for pod "pod-92c09de7-e8d7-4a38-8c21-18d2ed76cbdf" in namespace "emptydir-8344" to be "Succeeded or Failed" +Oct 27 14:02:16.786: INFO: Pod "pod-92c09de7-e8d7-4a38-8c21-18d2ed76cbdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.883841ms +Oct 27 14:02:18.791: INFO: Pod "pod-92c09de7-e8d7-4a38-8c21-18d2ed76cbdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010618573s +STEP: Saw pod success +Oct 27 14:02:18.791: INFO: Pod "pod-92c09de7-e8d7-4a38-8c21-18d2ed76cbdf" satisfied condition "Succeeded or Failed" +Oct 27 14:02:18.796: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-92c09de7-e8d7-4a38-8c21-18d2ed76cbdf container test-container: +STEP: delete the pod +Oct 27 14:02:18.814: INFO: Waiting for pod pod-92c09de7-e8d7-4a38-8c21-18d2ed76cbdf to disappear +Oct 27 14:02:18.818: INFO: Pod pod-92c09de7-e8d7-4a38-8c21-18d2ed76cbdf no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:18.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8344" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":8,"skipped":118,"failed":0} +SSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:18.830: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-491 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:19.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-491" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":9,"skipped":125,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:19.025: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9891 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:02:19.184: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa2a9a28-f8fb-4e96-9451-a871c9f188d1" in namespace "downward-api-9891" to be "Succeeded or Failed" +Oct 27 14:02:19.189: INFO: Pod "downwardapi-volume-fa2a9a28-f8fb-4e96-9451-a871c9f188d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.87774ms +Oct 27 14:02:21.195: INFO: Pod "downwardapi-volume-fa2a9a28-f8fb-4e96-9451-a871c9f188d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010783344s +STEP: Saw pod success +Oct 27 14:02:21.195: INFO: Pod "downwardapi-volume-fa2a9a28-f8fb-4e96-9451-a871c9f188d1" satisfied condition "Succeeded or Failed" +Oct 27 14:02:21.199: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-fa2a9a28-f8fb-4e96-9451-a871c9f188d1 container client-container: +STEP: delete the pod +Oct 27 14:02:21.220: INFO: Waiting for pod downwardapi-volume-fa2a9a28-f8fb-4e96-9451-a871c9f188d1 to disappear +Oct 27 14:02:21.224: INFO: Pod downwardapi-volume-fa2a9a28-f8fb-4e96-9451-a871c9f188d1 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:21.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9891" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":10,"skipped":136,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:21.237: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-3389 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:02:22.413: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:22.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-3389" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":11,"skipped":165,"failed":0} +SSSSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:22.436: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-2657 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:02:22.594: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ec2c842d-645e-4327-b48a-eefb6c4c9eeb" in namespace "security-context-test-2657" to be "Succeeded or Failed" +Oct 27 14:02:22.599: INFO: Pod "busybox-user-65534-ec2c842d-645e-4327-b48a-eefb6c4c9eeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196124ms +Oct 27 14:02:24.605: INFO: Pod "busybox-user-65534-ec2c842d-645e-4327-b48a-eefb6c4c9eeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010515463s +Oct 27 14:02:24.605: INFO: Pod "busybox-user-65534-ec2c842d-645e-4327-b48a-eefb6c4c9eeb" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:24.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-2657" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":12,"skipped":170,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:24.619: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-808 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Oct 27 14:02:24.789: INFO: observed Pod pod-test in namespace pods-808 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Oct 27 14:02:24.789: INFO: observed Pod pod-test in namespace pods-808 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC }] +Oct 27 14:02:24.800: INFO: observed Pod pod-test in namespace pods-808 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC }] +Oct 27 14:02:25.233: INFO: observed Pod pod-test in namespace pods-808 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC }] +Oct 27 14:02:26.417: INFO: Found Pod pod-test in namespace pods-808 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:02:24 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Oct 27 14:02:26.430: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched 
+STEP: replacing the Pod's status Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Oct 27 14:02:26.455: INFO: observed event type ADDED +Oct 27 14:02:26.455: INFO: observed event type MODIFIED +Oct 27 14:02:26.455: INFO: observed event type MODIFIED +Oct 27 14:02:26.455: INFO: observed event type MODIFIED +Oct 27 14:02:26.455: INFO: observed event type MODIFIED +Oct 27 14:02:26.455: INFO: observed event type MODIFIED +Oct 27 14:02:26.455: INFO: observed event type MODIFIED +Oct 27 14:02:26.455: INFO: observed event type MODIFIED +Oct 27 14:02:28.423: INFO: observed event type MODIFIED +Oct 27 14:02:28.615: INFO: observed event type MODIFIED +Oct 27 14:02:29.428: INFO: observed event type MODIFIED +Oct 27 14:02:29.434: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:29.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-808" for this suite. +•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":13,"skipped":196,"failed":0} +SSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:29.448: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5898 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pods +Oct 27 14:02:29.608: INFO: created test-pod-1 +Oct 27 14:02:29.617: INFO: created test-pod-2 +Oct 27 14:02:29.626: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Oct 27 14:02:29.664: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 14:02:30.669: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 14:02:31.670: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:32.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5898" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":14,"skipped":199,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:32.684: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4235 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:02:32.833: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4235 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' +Oct 27 14:02:32.921: INFO: stderr: "" +Oct 27 14:02:32.922: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 +Oct 27 14:02:32.926: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4235 delete pods e2e-test-httpd-pod' +Oct 27 14:02:41.488: INFO: stderr: "" +Oct 27 14:02:41.488: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:41.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4235" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":15,"skipped":216,"failed":0} +SS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:41.500: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6220 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 in namespace container-probe-6220 +Oct 27 14:02:43.677: INFO: Started pod liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 in namespace container-probe-6220 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:02:43.682: INFO: Initial restart count of pod liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 is 0 +Oct 27 14:03:03.745: INFO: Restart count of pod container-probe-6220/liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 is now 1 (20.062939011s elapsed) +Oct 27 14:03:23.809: INFO: Restart count of pod container-probe-6220/liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 is now 2 (40.127381576s elapsed) +Oct 27 14:03:43.870: INFO: Restart count of pod container-probe-6220/liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 is now 3 (1m0.188018528s elapsed) +Oct 27 14:04:03.932: INFO: Restart count of pod container-probe-6220/liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 is now 4 (1m20.24991021s elapsed) +Oct 27 14:05:04.122: INFO: Restart count of pod container-probe-6220/liveness-3dce84c7-7211-41fc-8057-1a819c8bfd69 is now 5 (2m20.439806797s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:04.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6220" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":16,"skipped":218,"failed":0} +SSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:04.146: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5018 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-5018 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 14:05:04.309: INFO: Found 0 stateful pods, waiting for 3 +Oct 27 14:05:14.367: INFO: Found 2 stateful pods, waiting for 3 +Oct 27 14:05:24.315: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:05:24.315: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:05:24.315: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:05:24.330: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5018 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:05:24.621: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:05:24.621: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:05:24.621: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 14:05:34.665: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Oct 27 14:05:44.691: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5018 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:05:45.054: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:05:45.054: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:05:45.054: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:05:55.086: INFO: Waiting for StatefulSet statefulset-5018/ss2 to complete update +Oct 27 14:05:55.086: INFO: Waiting for Pod statefulset-5018/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 14:05:55.086: INFO: Waiting for Pod statefulset-5018/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 14:06:05.097: INFO: Waiting for StatefulSet statefulset-5018/ss2 to complete update +STEP: Rolling back to a previous revision +Oct 27 14:06:15.096: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5018 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:06:15.421: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:06:15.421: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:06:15.421: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:06:25.465: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Oct 27 14:06:35.491: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5018 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:06:38.797: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:06:38.797: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:06:38.797: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:06:48.831: INFO: Deleting all statefulset in ns statefulset-5018 +Oct 27 14:06:48.836: INFO: Scaling statefulset ss2 to 0 +Oct 27 14:06:58.862: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:06:58.867: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:06:58.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5018" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":17,"skipped":224,"failed":0} + +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:06:58.897: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3589 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:10.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3589" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":18,"skipped":224,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:10.178: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9728 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Oct 27 14:07:10.337: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:15.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9728" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":19,"skipped":241,"failed":0} +SSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:15.573: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-9847 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:07:15.733: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:08:15.792: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:15.797: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-2295 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Oct 27 14:08:19.989: INFO: found a healthy node: izgw89f23rpcwrl79tpgp1z +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:08:32.077: INFO: pods created so far: [1 1 1] +Oct 27 14:08:32.077: INFO: length of pods created so far: 3 +Oct 27 14:08:34.098: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-2295" for this suite. +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:41.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-9847" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":20,"skipped":250,"failed":0} +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:41.194: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8300 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating api versions +Oct 27 14:08:41.347: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8300 api-versions' +Oct 27 14:08:41.444: INFO: stderr: "" +Oct 27 14:08:41.444: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling.k8s.io/v1\nautoscaling.k8s.io/v1beta2\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncert.gardener.cloud/v1alpha1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\ndns.gardener.cloud/v1alpha1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:41.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8300" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":21,"skipped":259,"failed":0} + +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:41.465: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9665 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Oct 27 14:08:41.615: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:08:45.134: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:59.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9665" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":22,"skipped":259,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:59.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8959 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-d48cfa6b-812d-4126-a16b-5c0bee07fdcd +STEP: Creating secret with name s-test-opt-upd-d63dd439-6e68-44c8-9717-dc43256bac29 +STEP: Creating the pod +Oct 27 14:08:59.919: INFO: The status of Pod pod-projected-secrets-5ed4530b-cfcb-4249-95f9-90053d2137f3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:09:01.925: INFO: The status of Pod pod-projected-secrets-5ed4530b-cfcb-4249-95f9-90053d2137f3 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-d48cfa6b-812d-4126-a16b-5c0bee07fdcd +STEP: Updating secret s-test-opt-upd-d63dd439-6e68-44c8-9717-dc43256bac29 +STEP: Creating secret with name s-test-opt-create-0a8c059c-e6ad-41fa-ac38-8a4155bb8473 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:09:04.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8959" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":23,"skipped":339,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:09:04.182: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-2394 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:09:04.360: INFO: Create a RollingUpdate DaemonSet +Oct 27 14:09:04.366: INFO: Check that daemon pods launch on every node of the cluster +Oct 27 14:09:04.375: INFO: Number of nodes with available pods: 0 +Oct 27 14:09:04.375: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:09:05.389: INFO: Number of nodes with available pods: 0 +Oct 27 14:09:05.389: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:09:06.388: INFO: Number of nodes with available pods: 2 +Oct 27 14:09:06.388: INFO: Number of running nodes: 2, number of available pods: 2 +Oct 27 14:09:06.389: INFO: Update the DaemonSet to trigger a rollout +Oct 27 14:09:06.400: INFO: Updating DaemonSet daemon-set +Oct 27 14:09:09.422: INFO: Roll back the DaemonSet before rollout is complete +Oct 27 14:09:09.431: INFO: Updating DaemonSet daemon-set +Oct 27 14:09:09.431: INFO: Make sure DaemonSet rollback is complete +Oct 27 14:09:09.437: INFO: Wrong image for pod: daemon-set-l6f85. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
+Oct 27 14:09:09.437: INFO: Pod daemon-set-l6f85 is not available +Oct 27 14:09:13.467: INFO: Pod daemon-set-2w4pl is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2394, will wait for the garbage collector to delete the pods +Oct 27 14:09:13.543: INFO: Deleting DaemonSet.extensions daemon-set took: 5.791533ms +Oct 27 14:09:13.644: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.854139ms +Oct 27 14:09:15.250: INFO: Number of nodes with available pods: 0 +Oct 27 14:09:15.250: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:09:15.255: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7201"},"items":null} + +Oct 27 14:09:15.259: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7201"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:09:15.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2394" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":24,"skipped":399,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:09:15.287: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-8818 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Oct 27 14:09:55.531: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For 
namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:09:55.531581 5703 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 27 14:09:55.531: INFO: Deleting pod "simpletest.rc-7chsg" in namespace "gc-8818" +Oct 27 14:09:55.541: INFO: Deleting pod "simpletest.rc-9f599" in namespace "gc-8818" +Oct 27 14:09:55.559: INFO: Deleting pod "simpletest.rc-hpldz" in namespace "gc-8818" +Oct 27 14:09:55.567: INFO: Deleting pod "simpletest.rc-jdstn" in namespace "gc-8818" +Oct 27 14:09:55.574: INFO: Deleting pod "simpletest.rc-krgk6" in namespace "gc-8818" +Oct 27 14:09:55.582: INFO: Deleting pod "simpletest.rc-ndftf" in namespace "gc-8818" +Oct 27 14:09:55.590: INFO: Deleting pod "simpletest.rc-sm5x4" in namespace "gc-8818" +Oct 27 14:09:55.596: INFO: Deleting pod "simpletest.rc-szsw9" in namespace "gc-8818" +Oct 27 14:09:55.604: INFO: Deleting pod "simpletest.rc-vkcxn" in namespace "gc-8818" +Oct 27 14:09:55.611: INFO: Deleting pod "simpletest.rc-xtmc7" in namespace "gc-8818" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:09:55.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-8818" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":25,"skipped":410,"failed":0} +SS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:09:55.629: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename aggregator +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-2705 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Oct 27 14:09:55.797: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the sample API server. 
+Oct 27 14:09:56.256: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Oct 27 14:09:58.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:00.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:02.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:04.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:06.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:08.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:10.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:12.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940596, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:10:15.768: INFO: Waited 1.454412647s for the sample-apiserver to be ready to handle requests. +STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Oct 27 14:10:16.067: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:16.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-2705" for this suite. +•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":26,"skipped":412,"failed":0} + +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:16.901: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9320 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-773c9256-211c-4a59-8f46-78265a53a8af +STEP: Creating secret with name s-test-opt-upd-5ab26302-6d6c-47cc-9303-b4b79026b838 +STEP: Creating the pod +Oct 27 14:10:17.108: INFO: The status of Pod pod-secrets-b38943ca-c94f-4a26-9c16-2a4d258fe33a is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:10:19.114: INFO: The status of Pod pod-secrets-b38943ca-c94f-4a26-9c16-2a4d258fe33a is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-773c9256-211c-4a59-8f46-78265a53a8af +STEP: Updating secret s-test-opt-upd-5ab26302-6d6c-47cc-9303-b4b79026b838 +STEP: Creating secret with name s-test-opt-create-2e907658-98b3-485f-81d9-0cb4597cc20e +STEP: waiting to observe update in volume 
+[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:21.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9320" for this suite. +•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":27,"skipped":412,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:21.316: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-5412 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-513 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-7004 +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:34.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-5412" for this suite. +STEP: Destroying namespace "nsdeletetest-513" for this suite. +Oct 27 14:10:34.806: INFO: Namespace nsdeletetest-513 was already deleted +STEP: Destroying namespace "nsdeletetest-7004" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":28,"skipped":428,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:34.811: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8654 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:48.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8654" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":346,"completed":29,"skipped":434,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:48.069: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4193 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +Oct 27 14:10:48.230: INFO: Creating simple deployment test-deployment-fcrq5 +Oct 27 14:10:48.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940648, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940648, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"test-deployment-fcrq5-794dd694d8\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940648, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940648, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +STEP: Getting /status +Oct 27 14:10:50.268: INFO: Deployment test-deployment-fcrq5 has Conditions: [{Available True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-fcrq5-794dd694d8" has successfully progressed.}] +STEP: updating Deployment Status +Oct 27 14:10:50.282: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940649, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940649, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940649, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940648, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-fcrq5-794dd694d8\" has successfully progressed."}, 
v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Oct 27 14:10:50.287: INFO: Observed &Deployment event: ADDED +Oct 27 14:10:50.287: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-fcrq5-794dd694d8"} +Oct 27 14:10:50.287: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.287: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-fcrq5-794dd694d8"} +Oct 27 14:10:50.287: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:10:50.287: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.287: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:10:50.287: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-fcrq5-794dd694d8" is progressing.} +Oct 27 14:10:50.287: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.287: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:10:50.287: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-fcrq5-794dd694d8" has successfully progressed.} +Oct 27 14:10:50.288: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.288: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:10:50.288: INFO: Observed Deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetAvailable ReplicaSet 
"test-deployment-fcrq5-794dd694d8" has successfully progressed.} +Oct 27 14:10:50.288: INFO: Found Deployment test-deployment-fcrq5 in namespace deployment-4193 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:10:50.288: INFO: Deployment test-deployment-fcrq5 has an updated status +STEP: patching the Statefulset Status +Oct 27 14:10:50.288: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:10:50.294: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Oct 27 14:10:50.298: INFO: Observed &Deployment event: ADDED +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-fcrq5-794dd694d8"} +Oct 27 14:10:50.298: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-fcrq5-794dd694d8"} +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:10:50.298: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:48 +0000 UTC 2021-10-27 14:10:48 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-fcrq5-794dd694d8" is progressing.} +Oct 27 14:10:50.298: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-fcrq5-794dd694d8" has successfully 
progressed.} +Oct 27 14:10:50.298: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:10:49 +0000 UTC 2021-10-27 14:10:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-fcrq5-794dd694d8" has successfully progressed.} +Oct 27 14:10:50.298: INFO: Observed deployment test-deployment-fcrq5 in namespace deployment-4193 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:10:50.298: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:10:50.298: INFO: Found deployment test-deployment-fcrq5 in namespace deployment-4193 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Oct 27 14:10:50.298: INFO: Deployment test-deployment-fcrq5 has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:10:50.303: INFO: Deployment "test-deployment-fcrq5": +&Deployment{ObjectMeta:{test-deployment-fcrq5 deployment-4193 5a66dfd2-46c3-43dd-8415-f18614aecc66 7955 1 2021-10-27 14:10:48 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 14:10:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2021-10-27 14:10:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2021-10-27 14:10:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d356c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-fcrq5-794dd694d8",LastUpdateTime:2021-10-27 14:10:50 +0000 UTC,LastTransitionTime:2021-10-27 14:10:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:10:50.307: INFO: New ReplicaSet "test-deployment-fcrq5-794dd694d8" of Deployment "test-deployment-fcrq5": +&ReplicaSet{ObjectMeta:{test-deployment-fcrq5-794dd694d8 deployment-4193 d8843094-fb46-41fa-9f85-26d07d48ee29 7950 1 2021-10-27 14:10:48 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-fcrq5 5a66dfd2-46c3-43dd-8415-f18614aecc66 0xc003d35c27 0xc003d35c28}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:10:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a66dfd2-46c3-43dd-8415-f18614aecc66\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:10:49 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 794dd694d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d35d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:10:50.312: INFO: Pod "test-deployment-fcrq5-794dd694d8-vlq7q" is available: +&Pod{ObjectMeta:{test-deployment-fcrq5-794dd694d8-vlq7q test-deployment-fcrq5-794dd694d8- deployment-4193 7d0700e5-eb87-4f25-a981-4abc91958feb 7949 0 2021-10-27 14:10:48 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[cni.projectcalico.org/containerID:05f223346f6c72981fe8726ff8274cbb6187aefa1ee415049de40528cb930edd cni.projectcalico.org/podIP:172.16.1.48/32 cni.projectcalico.org/podIPs:172.16.1.48/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-fcrq5-794dd694d8 d8843094-fb46-41fa-9f85-26d07d48ee29 0xc003d562b7 0xc003d562b8}] [] [{calico Update v1 2021-10-27 14:10:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:10:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8843094-fb46-41fa-9f85-26d07d48ee29\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:10:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j5gnp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5gnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCont
ainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.48,StartTime:2021-10-27 14:10:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:10:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://372da4164bf6b9e82d319759207393af3264b05368da4a272b7172d69728cbdd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:50.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4193" for this suite. 
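For reference, the /status checks above map onto plain kubectl calls. A minimal read-only sketch, assuming KUBECONFIG points at the shoot cluster; the namespace and deployment name are the ones generated for this run and will differ between runs:

```bash
# Fetch the Deployment's status subresource directly from the API server
kubectl get --raw /apis/apps/v1/namespaces/deployment-4193/deployments/test-deployment-fcrq5/status

# Print just the status conditions the test asserts on
kubectl -n deployment-4193 get deployment test-deployment-fcrq5 \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```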
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":30,"skipped":475,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:50.323: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-8647 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 14:10:56.517: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:56.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1027 14:10:56.517671 5703 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-8647" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":31,"skipped":496,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:56.530: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-590 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +STEP: creating the pod +Oct 27 14:10:56.682: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 create -f -' +Oct 27 14:10:57.262: INFO: stderr: "" +Oct 27 14:10:57.262: INFO: stdout: "pod/pause created\n" +Oct 27 14:10:57.262: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Oct 27 14:10:57.262: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-590" to be "running and ready" +Oct 27 14:10:57.267: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.821774ms +Oct 27 14:10:59.273: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.011627461s +Oct 27 14:10:59.273: INFO: Pod "pause" satisfied condition "running and ready" +Oct 27 14:10:59.274: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: adding the label testing-label with value testing-label-value to a pod +Oct 27 14:10:59.274: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 label pods pause testing-label=testing-label-value' +Oct 27 14:10:59.369: INFO: stderr: "" +Oct 27 14:10:59.369: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Oct 27 14:10:59.369: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 get pod pause -L testing-label' +Oct 27 14:10:59.446: INFO: stderr: "" +Oct 27 14:10:59.446: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Oct 27 14:10:59.446: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 label pods pause testing-label-' +Oct 27 14:10:59.520: INFO: stderr: "" +Oct 27 14:10:59.520: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Oct 27 14:10:59.520: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 get pod pause -L testing-label' +Oct 27 14:10:59.605: INFO: stderr: "" +Oct 27 14:10:59.605: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +STEP: using delete to clean up resources +Oct 27 14:10:59.605: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 delete --grace-period=0 --force -f -' +Oct 27 14:10:59.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:10:59.703: INFO: stdout: "pod \"pause\" force deleted\n" +Oct 27 14:10:59.703: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 get rc,svc -l name=pause --no-headers' +Oct 27 14:10:59.796: INFO: stderr: "No resources found in kubectl-590 namespace.\n" +Oct 27 14:10:59.796: INFO: stdout: "" +Oct 27 14:10:59.796: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-590 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 14:10:59.905: INFO: stderr: "" +Oct 27 14:10:59.905: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:59.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-590" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":32,"skipped":511,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:59.919: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9770 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:00.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9770" for this suite. 
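The Endpoint lifecycle above (create, list, update, patch, delete by collection) logs no payloads; a minimal sketch of the equivalent manual operations, with an illustrative object name and addresses:

```bash
kubectl -n services-9770 apply -f - <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: example-endpoint
subsets:
  - addresses:
      - ip: 10.0.0.10
    ports:
      - port: 80
EOF

# Patch and clean up, mirroring the logged lifecycle
kubectl -n services-9770 patch endpoints example-endpoint \
  --type merge -p '{"metadata":{"labels":{"test":"patched"}}}'
kubectl -n services-9770 delete endpoints example-endpoint
```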
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":33,"skipped":524,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:00.142: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-5640 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:11:00.329: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:11:00.337: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:11:00.358: INFO: waiting for watch events with expected annotations +Oct 27 14:11:00.358: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:00.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-5640" for this suite. 
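A minimal discovery.k8s.io/v1 EndpointSlice against which the same create/get/list/patch operations can be tried; the name, service label, and addresses are illustrative:

```bash
kubectl -n endpointslice-5640 apply -f - <<'EOF'
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-slice
  labels:
    kubernetes.io/service-name: example-svc
addressType: IPv4
endpoints:
  - addresses:
      - "10.0.0.11"
ports:
  - name: http
    port: 80
    protocol: TCP
EOF
kubectl -n endpointslice-5640 get endpointslices
```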
+•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":34,"skipped":580,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:00.398: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-9553 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Oct 27 14:11:00.548: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:12:00.593: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:12:00.598: INFO: Starting informer... +STEP: Starting pods... +Oct 27 14:12:00.823: INFO: Pod1 is running on izgw89f23rpcwrl79tpgp1z. Tainting Node +Oct 27 14:12:02.851: INFO: Pod2 is running on izgw89f23rpcwrl79tpgp1z. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Oct 27 14:12:09.063: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Oct 27 14:12:29.088: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:29.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-9553" for this suite. 
+•{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":35,"skipped":588,"failed":0} +SS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:29.119: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1684 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:12:29.280: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea5cb889-cd49-463e-b921-ce341f6cbf12" in namespace "downward-api-1684" to be "Succeeded or Failed" +Oct 27 14:12:29.285: INFO: Pod "downwardapi-volume-ea5cb889-cd49-463e-b921-ce341f6cbf12": Phase="Pending", Reason="", readiness=false. Elapsed: 5.303424ms +Oct 27 14:12:31.290: INFO: Pod "downwardapi-volume-ea5cb889-cd49-463e-b921-ce341f6cbf12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010640867s +STEP: Saw pod success +Oct 27 14:12:31.290: INFO: Pod "downwardapi-volume-ea5cb889-cd49-463e-b921-ce341f6cbf12" satisfied condition "Succeeded or Failed" +Oct 27 14:12:31.299: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-ea5cb889-cd49-463e-b921-ce341f6cbf12 container client-container: +STEP: delete the pod +Oct 27 14:12:32.338: INFO: Waiting for pod downwardapi-volume-ea5cb889-cd49-463e-b921-ce341f6cbf12 to disappear +Oct 27 14:12:32.343: INFO: Pod downwardapi-volume-ea5cb889-cd49-463e-b921-ce341f6cbf12 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:32.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1684" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":36,"skipped":590,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:32.356: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9792 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:32.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9792" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":37,"skipped":600,"failed":0} +SS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:32.528: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9250 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-9250 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating stateful set ss in namespace statefulset-9250 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9250 +Oct 27 14:12:32.689: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 14:12:42.697: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently 
Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Oct 27 14:12:42.703: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9250 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:12:43.079: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:12:43.079: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:12:43.079: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:12:43.085: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 14:12:53.093: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:12:53.093: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:12:53.116: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 14:12:53.116: INFO: ss-0 izgw89f23rpcwrl79tpgp1z Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:32 +0000 UTC }] +Oct 27 14:12:53.117: INFO: +Oct 27 14:12:53.117: INFO: StatefulSet ss has not reached scale 3, at 1 +Oct 27 14:12:54.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990595813s +Oct 27 14:12:55.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983948592s +Oct 27 14:12:56.136: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977631655s +Oct 27 14:12:57.142: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.971743013s +Oct 27 14:12:58.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.965750603s +Oct 27 14:12:59.155: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.958251533s +Oct 27 14:13:00.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.952733943s +Oct 27 14:13:01.167: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.94703904s +Oct 27 14:13:02.174: INFO: Verifying statefulset ss doesn't scale past 3 for another 940.374358ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9250 +Oct 27 14:13:03.181: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9250 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:13:03.487: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:13:03.487: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:13:03.487: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:13:03.487: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9250 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:13:03.769: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 14:13:03.769: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:13:03.769: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:13:03.770: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9250 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:13:04.049: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 14:13:04.049: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:13:04.049: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:13:04.055: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:13:04.055: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:13:04.055: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Oct 27 14:13:04.061: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9250 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:13:04.364: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:13:04.364: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:13:04.364: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:13:04.364: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9250 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:13:04.672: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:13:04.672: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:13:04.672: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:13:04.672: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9250 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:13:04.990: INFO: stderr: "+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:13:04.990: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:13:04.990: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:13:04.991: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:13:04.995: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Oct 27 14:13:15.006: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:13:15.006: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:13:15.006: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:13:15.023: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 14:13:15.023: INFO: ss-0 izgw89f23rpcwrl79tpgp1z Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:32 +0000 UTC }] +Oct 27 14:13:15.023: INFO: ss-1 izgw89f23rpcwrl79tpgp1z Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:53 +0000 UTC }] +Oct 27 14:13:15.023: INFO: ss-2 izgw81stpxs0bun38i01tfz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:53 +0000 UTC }] +Oct 27 14:13:15.023: INFO: +Oct 27 14:13:15.023: INFO: StatefulSet ss has not reached scale 0, at 3 +Oct 27 14:13:16.029: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 14:13:16.029: INFO: ss-0 izgw89f23rpcwrl79tpgp1z Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:32 +0000 UTC }] +Oct 27 14:13:16.029: INFO: ss-1 izgw89f23rpcwrl79tpgp1z Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:12:53 +0000 UTC }] +Oct 27 14:13:16.029: INFO: +Oct 27 14:13:16.029: INFO: StatefulSet ss has not reached scale 0, at 2 +Oct 27 14:13:17.035: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.985788338s +Oct 27 14:13:18.041: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.97984492s +Oct 27 14:13:19.048: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.973473984s +Oct 27 14:13:20.054: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.96694388s +Oct 27 14:13:21.059: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.960983566s +Oct 27 14:13:22.065: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.955793137s +Oct 27 14:13:23.071: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.94936397s +Oct 27 14:13:24.079: INFO: Verifying statefulset ss doesn't scale past 0 for another 941.648485ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9250 +Oct 27 14:13:25.090: INFO: Scaling statefulset ss to 0 +Oct 27 14:13:25.105: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:13:25.110: INFO: Deleting all statefulset in ns statefulset-9250 +Oct 27 14:13:25.114: INFO: Scaling statefulset ss to 0 +Oct 27 14:13:25.133: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:13:25.137: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:25.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9250" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":38,"skipped":602,"failed":0} +SS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:25.165: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2405 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-97553e38-59e1-4aa7-a402-e497eb51ae8a +STEP: Creating a pod to test consume secrets +Oct 27 14:13:25.329: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-146cef57-ac2f-4283-ac71-a3ca83cc732c" in namespace "projected-2405" to be "Succeeded or Failed" +Oct 27 14:13:25.334: INFO: Pod "pod-projected-secrets-146cef57-ac2f-4283-ac71-a3ca83cc732c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.911356ms +Oct 27 14:13:27.339: INFO: Pod "pod-projected-secrets-146cef57-ac2f-4283-ac71-a3ca83cc732c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010549707s +STEP: Saw pod success +Oct 27 14:13:27.339: INFO: Pod "pod-projected-secrets-146cef57-ac2f-4283-ac71-a3ca83cc732c" satisfied condition "Succeeded or Failed" +Oct 27 14:13:27.344: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-secrets-146cef57-ac2f-4283-ac71-a3ca83cc732c container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:13:27.406: INFO: Waiting for pod pod-projected-secrets-146cef57-ac2f-4283-ac71-a3ca83cc732c to disappear +Oct 27 14:13:27.410: INFO: Pod pod-projected-secrets-146cef57-ac2f-4283-ac71-a3ca83cc732c no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:27.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2405" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":39,"skipped":604,"failed":0} +SSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:27.424: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-5907 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:27.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-5907" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":40,"skipped":609,"failed":0} + +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:27.595: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1959 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-115453c6-111d-4b3f-a600-86936183fbac +STEP: Creating the pod +Oct 27 14:13:27.771: INFO: The status of Pod pod-configmaps-538cba42-3cf7-40e5-b206-6f85caa8f543 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:13:29.777: INFO: The status of Pod pod-configmaps-538cba42-3cf7-40e5-b206-6f85caa8f543 is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-115453c6-111d-4b3f-a600-86936183fbac +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:31.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1959" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":41,"skipped":609,"failed":0} + +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:31.866: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6118 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:13:34.050: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:34.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-6118" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":42,"skipped":609,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:34.076: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-2154 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:13:34.229: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 14:13:37.954: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2154 --namespace=crd-publish-openapi-2154 create -f -' +Oct 27 14:13:38.681: INFO: stderr: "" +Oct 27 14:13:38.682: INFO: stdout: "e2e-test-crd-publish-openapi-4125-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 14:13:38.682: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2154 --namespace=crd-publish-openapi-2154 delete e2e-test-crd-publish-openapi-4125-crds test-cr' +Oct 27 14:13:38.780: INFO: stderr: "" +Oct 27 14:13:38.780: INFO: stdout: "e2e-test-crd-publish-openapi-4125-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Oct 27 14:13:38.780: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2154 --namespace=crd-publish-openapi-2154 apply -f -' +Oct 27 14:13:39.037: INFO: stderr: "" +Oct 27 14:13:39.037: INFO: stdout: "e2e-test-crd-publish-openapi-4125-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 14:13:39.037: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2154 --namespace=crd-publish-openapi-2154 delete e2e-test-crd-publish-openapi-4125-crds test-cr' +Oct 27 14:13:39.118: INFO: stderr: "" +Oct 27 14:13:39.118: INFO: stdout: "e2e-test-crd-publish-openapi-4125-crd.crd-publish-openapi-test-unknown-at-root.example.com 
\"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 14:13:39.118: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2154 explain e2e-test-crd-publish-openapi-4125-crds' +Oct 27 14:13:39.318: INFO: stderr: "" +Oct 27 14:13:39.318: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4125-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:42.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2154" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":43,"skipped":620,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:42.994: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-7518 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7518.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7518.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7518.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7518.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7518.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7518.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:13:59.384: INFO: DNS probes using dns-7518/dns-test-5430ca4a-574c-4546-9b6a-1f1c0b347d88 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:59.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-7518" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":44,"skipped":635,"failed":0} +S +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:59.406: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2209 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-dd570002-40db-4017-a4c0-38ae63600e01 +STEP: Creating a pod to test consume configMaps +Oct 27 14:13:59.572: INFO: Waiting up to 5m0s for pod "pod-configmaps-17eb830d-3337-40f2-9904-c088e453cca4" in namespace "configmap-2209" to be "Succeeded or Failed" +Oct 27 14:13:59.577: INFO: Pod "pod-configmaps-17eb830d-3337-40f2-9904-c088e453cca4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.916568ms +Oct 27 14:14:01.583: INFO: Pod "pod-configmaps-17eb830d-3337-40f2-9904-c088e453cca4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010516243s +STEP: Saw pod success +Oct 27 14:14:01.583: INFO: Pod "pod-configmaps-17eb830d-3337-40f2-9904-c088e453cca4" satisfied condition "Succeeded or Failed" +Oct 27 14:14:01.587: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-17eb830d-3337-40f2-9904-c088e453cca4 container agnhost-container: +STEP: delete the pod +Oct 27 14:14:01.647: INFO: Waiting for pod pod-configmaps-17eb830d-3337-40f2-9904-c088e453cca4 to disappear +Oct 27 14:14:01.651: INFO: Pod pod-configmaps-17eb830d-3337-40f2-9904-c088e453cca4 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:14:01.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2209" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":45,"skipped":636,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:14:01.665: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2053 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:01.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2053" for this suite. 
+ +• [SLOW TEST:300.189 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":46,"skipped":687,"failed":0} +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:01.854: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1722 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 14:19:02.025: INFO: Waiting up to 5m0s for pod "pod-7f2254a5-2e38-4a1f-a38c-30d698f266b6" in namespace "emptydir-1722" to be "Succeeded or Failed" +Oct 27 14:19:02.030: INFO: Pod "pod-7f2254a5-2e38-4a1f-a38c-30d698f266b6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.135358ms +Oct 27 14:19:04.035: INFO: Pod "pod-7f2254a5-2e38-4a1f-a38c-30d698f266b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010824939s +STEP: Saw pod success +Oct 27 14:19:04.036: INFO: Pod "pod-7f2254a5-2e38-4a1f-a38c-30d698f266b6" satisfied condition "Succeeded or Failed" +Oct 27 14:19:04.040: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-7f2254a5-2e38-4a1f-a38c-30d698f266b6 container test-container: +STEP: delete the pod +Oct 27 14:19:04.102: INFO: Waiting for pod pod-7f2254a5-2e38-4a1f-a38c-30d698f266b6 to disappear +Oct 27 14:19:04.107: INFO: Pod pod-7f2254a5-2e38-4a1f-a38c-30d698f266b6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:04.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1722" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":47,"skipped":687,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:04.121: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9956 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:20.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9956" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":346,"completed":48,"skipped":744,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:20.431: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7394 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-7394 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-7394 +Oct 27 14:19:20.598: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 14:19:30.605: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Oct 27 14:19:30.631: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Oct 27 14:19:30.645: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Oct 27 14:19:30.649: INFO: Observed &StatefulSet event: ADDED +Oct 27 14:19:30.649: INFO: Found Statefulset ss in namespace statefulset-7394 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:19:30.649: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Oct 27 14:19:30.649: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:19:30.667: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Oct 27 14:19:30.671: INFO: Observed &StatefulSet event: ADDED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:19:30.671: INFO: Deleting all statefulset in ns statefulset-7394 +Oct 27 14:19:30.676: INFO: Scaling statefulset ss to 0 +Oct 27 14:19:40.697: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:19:40.702: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:40.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7394" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":49,"skipped":758,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:40.729: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-5102 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:19:40.887: INFO: Got root ca configmap in namespace "svcaccounts-5102" +Oct 27 14:19:40.893: INFO: Deleted root ca configmap in namespace "svcaccounts-5102" +STEP: waiting for a new root ca configmap created +Oct 27 14:19:41.400: INFO: Recreated root ca configmap in namespace "svcaccounts-5102" +Oct 27 14:19:41.405: INFO: Updated root ca configmap in namespace "svcaccounts-5102" +STEP: waiting for the root ca configmap reconciled +Oct 27 14:19:41.911: INFO: Reconciled root ca configmap in namespace "svcaccounts-5102" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:41.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-5102" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":50,"skipped":771,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:41.925: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5252 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-dd13659e-4140-4b0d-87fd-b35d15ad177a +STEP: Creating the pod +Oct 27 14:19:42.112: INFO: The status of Pod pod-projected-configmaps-2a572400-55ef-4f40-bf72-6964f1e04d29 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:19:44.119: INFO: The status of Pod pod-projected-configmaps-2a572400-55ef-4f40-bf72-6964f1e04d29 is Running (Ready = true) +STEP: Updating configmap projected-configmap-test-upd-dd13659e-4140-4b0d-87fd-b35d15ad177a +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:10.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5252" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":51,"skipped":833,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:10.816: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1772 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:21:11.518: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:21:14.544: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Oct 27 14:21:16.619: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=webhook-1772 attach --namespace=webhook-1772 to-be-attached-pod -i -c=container1' +Oct 27 14:21:16.800: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:16.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1772" for this suite. +STEP: Destroying namespace "webhook-1772-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":52,"skipped":841,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:16.851: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7803 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:21:17.007: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Oct 27 14:21:22.012: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 14:21:22.012: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:21:24.086: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-7803 c8187c5f-43ce-4415-a94b-745d53f0393f 11796 1 2021-10-27 14:21:22 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 14:21:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:21:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} 
status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f8c108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:21:22 +0000 UTC,LastTransitionTime:2021-10-27 14:21:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2021-10-27 14:21:23 +0000 UTC,LastTransitionTime:2021-10-27 14:21:22 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:21:24.091: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-7803 ca8cc261-2a30-43bd-b6ad-07fed766b09d 11786 1 2021-10-27 14:21:22 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c8187c5f-43ce-4415-a94b-745d53f0393f 0xc004f8c4e7 0xc004f8c4e8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:21:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c8187c5f-43ce-4415-a94b-745d53f0393f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:21:23 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f8c598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:21:24.096: INFO: Pod "test-cleanup-deployment-5b4d99b59b-k69ch" is available: +&Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-k69ch test-cleanup-deployment-5b4d99b59b- deployment-7803 7f4dd7f6-669f-4dcd-938c-a0556d814bdb 11785 0 2021-10-27 14:21:22 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[cni.projectcalico.org/containerID:d1b30b6f9bd500dac4170d870a053b88209c6e08fd57df656448c60b762fd6eb cni.projectcalico.org/podIP:172.16.1.72/32 cni.projectcalico.org/podIPs:172.16.1.72/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b ca8cc261-2a30-43bd-b6ad-07fed766b09d 0xc004f8c937 0xc004f8c938}] [] [{calico Update v1 2021-10-27 14:21:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:21:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca8cc261-2a30-43bd-b6ad-07fed766b09d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:21:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.72\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tkmzm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkmzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCont
ainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:21:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:21:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:21:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:21:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.72,StartTime:2021-10-27 14:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:21:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://463ac20058f24d2b5352543875cce2a98483a0ebb4cda81949f97801d4cd59ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:24.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7803" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":53,"skipped":873,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:24.119: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9080 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating pod +Oct 27 14:21:24.284: INFO: The status of Pod pod-hostip-4cdb76dc-e39a-4866-b920-13aacafa80b1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:21:26.290: INFO: The status of Pod pod-hostip-4cdb76dc-e39a-4866-b920-13aacafa80b1 is Running (Ready = true) +Oct 27 14:21:26.300: INFO: Pod pod-hostip-4cdb76dc-e39a-4866-b920-13aacafa80b1 has hostIP: 10.250.8.35 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:26.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9080" for this suite. +•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":54,"skipped":885,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:26.319: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-6851 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Oct 27 14:21:26.505: INFO: Number of nodes with available pods: 0 +Oct 27 14:21:26.505: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:21:27.520: INFO: Number of nodes with available pods: 1 +Oct 27 14:21:27.520: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:21:28.518: INFO: Number of nodes with available pods: 2 +Oct 27 14:21:28.519: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +Oct 27 14:21:28.547: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"11860"},"items":null} + +Oct 27 14:21:28.552: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"11860"},"items":[{"metadata":{"name":"daemon-set-9t75b","generateName":"daemon-set-","namespace":"daemonsets-6851","uid":"f2776178-d1db-494b-a4e1-8efed2ddc43d","resourceVersion":"11860","creationTimestamp":"2021-10-27T14:21:26Z","deletionTimestamp":"2021-10-27T14:21:58Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"a0aa73ce4ce27b896a99cca0b2c198d4e5bf4fd2f57319fe339cb245d9ad1f20","cni.projectcalico.org/podIP":"172.16.0.42/32","cni.projectcalico.org/podIPs":"172.16.0.42/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e41b367a-ef90-408d-82e9-27341a8d4864","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e41b367a-ef90-408d-82e9-27341a8d4864\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:s
tatus":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-6clmh","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmanu-jzf.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-6clmh","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"izgw81stpxs0bun38i01tfz","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["izgw81stpxs0bun38i01tfz"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:26Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:28Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:28Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:26Z"}],"hostIP":"10.250.8.34","podIP":"172.16.0.42","podIPs":[{"ip":"172.16.0.42"}],"startTime":"2021-10-27T14:21:26Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T14:21:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://2f90fd37fec8af6e00eb56d412d9610fa0e8850e70ea5a294d1ac3ce563d6ff1","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-rt6lq","generateName":"daemon-set-","namespace":"daemonsets-6851","uid":"c2d14996-91fa-461e-b75a-7ee7bc1fd731","resourceVersion":"11859","creationTimestamp":"2021-10-27T14:21:26Z","deletionTimestamp":"2021-10-27T14:21:58Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"
7f60fa9b13546a631934ea0dff34590bdfd4e7c1d8ce00b387b09073878f0c5b","cni.projectcalico.org/podIP":"172.16.1.74/32","cni.projectcalico.org/podIPs":"172.16.1.74/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e41b367a-ef90-408d-82e9-27341a8d4864","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e41b367a-ef90-408d-82e9-27341a8d4864\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:21:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-b6qbx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmanu-jzf.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-b6qbx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"izgw89f23rpcwrl79tpgp1z","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["izgw89f23rpcwrl79tpgp1z"]}]}]}}}
,"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:26Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:27Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:27Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:21:26Z"}],"hostIP":"10.250.8.35","podIP":"172.16.1.74","podIPs":[{"ip":"172.16.1.74"}],"startTime":"2021-10-27T14:21:26Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T14:21:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://a15df9f334e51d9f7dd3f5eeb1966049e15d26538c504df8ceba1e8de89bd61d","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:28.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6851" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":55,"skipped":899,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:28.577: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6791 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service nodeport-test with type=NodePort in namespace services-6791 +STEP: creating replication controller nodeport-test in namespace services-6791 +I1027 14:21:28.741741 5703 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6791, replica count: 2 +I1027 14:21:31.793549 5703 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:21:31.793: INFO: Creating new exec pod +Oct 27 14:21:34.822: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6791 exec execpodd9p69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:21:35.124: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:21:35.124: INFO: stdout: "nodeport-test-bggnp" +Oct 27 14:21:35.124: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6791 exec execpodd9p69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.28.239.50 80' +Oct 27 14:21:35.395: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.28.239.50 80\nConnection to 172.28.239.50 80 port [tcp/http] succeeded!\n" +Oct 27 14:21:35.396: INFO: stdout: "nodeport-test-bggnp" +Oct 27 14:21:35.396: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6791 exec execpodd9p69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.34 32745' +Oct 27 14:21:35.667: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.34 32745\nConnection to 10.250.8.34 32745 port [tcp/*] succeeded!\n" +Oct 27 14:21:35.667: INFO: stdout: "nodeport-test-w9fcn" +Oct 27 14:21:35.667: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6791 exec execpodd9p69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 32745' +Oct 27 14:21:35.969: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.35 32745\nConnection to 10.250.8.35 32745 port [tcp/*] succeeded!\n" +Oct 27 14:21:35.969: INFO: stdout: "" +Oct 27 14:21:36.969: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6791 exec execpodd9p69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 32745' +Oct 27 14:21:37.224: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.35 32745\nConnection to 10.250.8.35 32745 port [tcp/*] succeeded!\n" +Oct 27 14:21:37.224: INFO: stdout: "" +Oct 27 14:21:37.970: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6791 exec execpodd9p69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 32745' +Oct 27 14:21:38.234: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.35 32745\nConnection to 10.250.8.35 32745 port [tcp/*] succeeded!\n" +Oct 27 14:21:38.234: INFO: stdout: "nodeport-test-w9fcn" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:38.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6791" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":56,"skipped":909,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:38.249: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9191 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:21:38.402: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Oct 27 14:21:42.046: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 create -f -' +Oct 27 14:21:42.653: INFO: 
stderr: "" +Oct 27 14:21:42.654: INFO: stdout: "e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 14:21:42.654: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 delete e2e-test-crd-publish-openapi-5276-crds test-foo' +Oct 27 14:21:42.747: INFO: stderr: "" +Oct 27 14:21:42.747: INFO: stdout: "e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Oct 27 14:21:42.747: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 apply -f -' +Oct 27 14:21:42.969: INFO: stderr: "" +Oct 27 14:21:42.970: INFO: stdout: "e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 14:21:42.970: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 delete e2e-test-crd-publish-openapi-5276-crds test-foo' +Oct 27 14:21:43.046: INFO: stderr: "" +Oct 27 14:21:43.046: INFO: stdout: "e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Oct 27 14:21:43.046: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 create -f -' +Oct 27 14:21:43.223: INFO: rc: 1 +Oct 27 14:21:43.223: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 apply -f -' +Oct 27 14:21:43.397: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Oct 27 14:21:43.397: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 create -f -' +Oct 27 14:21:43.569: INFO: rc: 1 +Oct 27 14:21:43.569: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 --namespace=crd-publish-openapi-9191 apply -f -' +Oct 27 14:21:43.748: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Oct 27 14:21:43.748: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 explain e2e-test-crd-publish-openapi-5276-crds' +Oct 27 
14:21:43.913: INFO: stderr: "" +Oct 27 14:21:43.913: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Oct 27 14:21:43.913: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 explain e2e-test-crd-publish-openapi-5276-crds.metadata' +Oct 27 14:21:44.100: INFO: stderr: "" +Oct 27 14:21:44.100: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. 
The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Oct 27 14:21:44.100: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 explain e2e-test-crd-publish-openapi-5276-crds.spec' +Oct 27 14:21:44.266: INFO: stderr: "" +Oct 27 14:21:44.266: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Oct 27 14:21:44.266: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 explain e2e-test-crd-publish-openapi-5276-crds.spec.bars' +Oct 27 14:21:44.448: INFO: stderr: "" +Oct 27 14:21:44.448: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Oct 27 14:21:44.449: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-9191 explain e2e-test-crd-publish-openapi-5276-crds.spec.bars2' +Oct 27 14:21:44.622: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:48.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9191" for this suite. 
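+For reference, the `kubectl explain` behaviour verified above applies to any CRD whose validation schema is published to OpenAPI (the resource name below is a placeholder):
+kubectl explain <crd-plural>              # description and top-level fields from the published schema
+kubectl explain <crd-plural>.spec.bars    # drills into nested, schema-defined properties
+kubectl explain <crd-plural>.spec.bars2   # a property absent from the schema fails (the "rc: 1" above)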
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":57,"skipped":949,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:48.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7346 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-9ba24b9a-d19b-47b9-bf30-2c97fc722d21 +STEP: Creating a pod to test consume secrets +Oct 27 14:21:48.356: INFO: Waiting up to 5m0s for pod "pod-secrets-0bf800b9-fcb7-41e1-be87-a5c2ac826fa4" in namespace "secrets-7346" to be "Succeeded or Failed" +Oct 27 14:21:48.361: INFO: Pod "pod-secrets-0bf800b9-fcb7-41e1-be87-a5c2ac826fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.889232ms +Oct 27 14:21:50.368: INFO: Pod "pod-secrets-0bf800b9-fcb7-41e1-be87-a5c2ac826fa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01166331s +STEP: Saw pod success +Oct 27 14:21:50.368: INFO: Pod "pod-secrets-0bf800b9-fcb7-41e1-be87-a5c2ac826fa4" satisfied condition "Succeeded or Failed" +Oct 27 14:21:50.372: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-0bf800b9-fcb7-41e1-be87-a5c2ac826fa4 container secret-volume-test: +STEP: delete the pod +Oct 27 14:21:50.392: INFO: Waiting for pod pod-secrets-0bf800b9-fcb7-41e1-be87-a5c2ac826fa4 to disappear +Oct 27 14:21:50.398: INFO: Pod pod-secrets-0bf800b9-fcb7-41e1-be87-a5c2ac826fa4 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:50.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7346" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":58,"skipped":967,"failed":0} + +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:50.414: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-33 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-f4bcb708-d068-44cb-992d-ca958fe457d5 +STEP: Creating a pod to test consume secrets +Oct 27 14:21:50.581: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a1923b04-2344-4ae7-9b12-bde23c63499a" in namespace "projected-33" to be "Succeeded or Failed" +Oct 27 14:21:50.587: INFO: Pod "pod-projected-secrets-a1923b04-2344-4ae7-9b12-bde23c63499a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.604558ms +Oct 27 14:21:52.594: INFO: Pod "pod-projected-secrets-a1923b04-2344-4ae7-9b12-bde23c63499a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012261378s +STEP: Saw pod success +Oct 27 14:21:52.594: INFO: Pod "pod-projected-secrets-a1923b04-2344-4ae7-9b12-bde23c63499a" satisfied condition "Succeeded or Failed" +Oct 27 14:21:52.598: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-secrets-a1923b04-2344-4ae7-9b12-bde23c63499a container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:21:52.617: INFO: Waiting for pod pod-projected-secrets-a1923b04-2344-4ae7-9b12-bde23c63499a to disappear +Oct 27 14:21:52.621: INFO: Pod pod-projected-secrets-a1923b04-2344-4ae7-9b12-bde23c63499a no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:52.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-33" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":59,"skipped":967,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:52.635: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3365 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:21:53.218: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:21:56.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:21:56.250: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:59.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3365" for this suite. +STEP: Destroying namespace "webhook-3365-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":60,"skipped":984,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:59.638: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5969 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-fb4a0ed9-dee1-43b6-a6e4-9ccc7e8a56fc +STEP: Creating a pod to test consume configMaps +Oct 27 14:21:59.829: INFO: Waiting up to 5m0s for pod "pod-configmaps-c95e7d7f-f93d-4675-8d5e-09122bc29515" in namespace "configmap-5969" to be "Succeeded or Failed" +Oct 27 14:21:59.834: INFO: Pod "pod-configmaps-c95e7d7f-f93d-4675-8d5e-09122bc29515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.780148ms +Oct 27 14:22:01.840: INFO: Pod "pod-configmaps-c95e7d7f-f93d-4675-8d5e-09122bc29515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010238024s +STEP: Saw pod success +Oct 27 14:22:01.840: INFO: Pod "pod-configmaps-c95e7d7f-f93d-4675-8d5e-09122bc29515" satisfied condition "Succeeded or Failed" +Oct 27 14:22:01.844: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-c95e7d7f-f93d-4675-8d5e-09122bc29515 container agnhost-container: +STEP: delete the pod +Oct 27 14:22:01.906: INFO: Waiting for pod pod-configmaps-c95e7d7f-f93d-4675-8d5e-09122bc29515 to disappear +Oct 27 14:22:01.911: INFO: Pod pod-configmaps-c95e7d7f-f93d-4675-8d5e-09122bc29515 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:01.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5969" for this suite. 
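+For reference, configMap volumes support the same per-item mapping and mode fields as the secret volumes above; a minimal sketch with hypothetical names:
+kubectl create configmap my-config --from-literal=data-1=value-1
+# the configMap analogue of the secret volume stanza shown earlier:
+#   - name: cm-volume
+#     configMap:
+#       name: my-config
+#       items:
+#       - key: data-1
+#         path: path/to/data-1
+#         mode: 0400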
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":61,"skipped":1084,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:01.925: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-8923 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Oct 27 14:22:02.100: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:22:05.663: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:20.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8923" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":62,"skipped":1135,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:20.247: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3029 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3029.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3029.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3029.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3029.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3029.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3029.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:22:22.677: INFO: DNS probes using dns-3029/dns-test-693b60e7-99fc-4aac-9e12-b6e28aab73c1 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:22.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3029" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":63,"skipped":1145,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:22.713: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-8484 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-downwardapi-cmr6 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:22:22.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cmr6" in namespace "subpath-8484" to be "Succeeded or Failed" +Oct 27 14:22:22.968: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.623248ms +Oct 27 14:22:24.974: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.010543754s +Oct 27 14:22:26.981: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 4.017093655s +Oct 27 14:22:28.995: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 6.031655798s +Oct 27 14:22:31.003: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 8.039250198s +Oct 27 14:22:33.010: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 10.046393231s +Oct 27 14:22:35.015: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 12.051592937s +Oct 27 14:22:37.021: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 14.057654264s +Oct 27 14:22:39.027: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 16.063504338s +Oct 27 14:22:41.034: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 18.070791716s +Oct 27 14:22:43.040: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Running", Reason="", readiness=true. Elapsed: 20.076707699s +Oct 27 14:22:45.046: INFO: Pod "pod-subpath-test-downwardapi-cmr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.08229903s +STEP: Saw pod success +Oct 27 14:22:45.046: INFO: Pod "pod-subpath-test-downwardapi-cmr6" satisfied condition "Succeeded or Failed" +Oct 27 14:22:45.050: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-subpath-test-downwardapi-cmr6 container test-container-subpath-downwardapi-cmr6: +STEP: delete the pod +Oct 27 14:22:45.070: INFO: Waiting for pod pod-subpath-test-downwardapi-cmr6 to disappear +Oct 27 14:22:45.074: INFO: Pod pod-subpath-test-downwardapi-cmr6 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-cmr6 +Oct 27 14:22:45.074: INFO: Deleting pod "pod-subpath-test-downwardapi-cmr6" in namespace "subpath-8484" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:45.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-8484" for this suite. 
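+For reference, the subPath mechanics exercised above reduce to a hypothetical pod that mounts a single file out of a downward-API volume (names are illustrative):
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: check
+    image: busybox
+    command: ["cat", "/test/podname"]
+    volumeMounts:
+    - name: downward
+      mountPath: /test/podname
+      subPath: podname             # mount only this file, not the whole volume
+  volumes:
+  - name: downward
+    downwardAPI:
+      items:
+      - path: podname
+        fieldRef:
+          fieldPath: metadata.name
+EOF
+kubectl logs subpath-demo          # prints "subpath-demo"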
+•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":64,"skipped":1154,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:45.092: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8346 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:22:45.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-930a55c6-d68c-4c43-ad5c-d648c767fb05" in namespace "projected-8346" to be "Succeeded or Failed" +Oct 27 14:22:45.255: INFO: Pod "downwardapi-volume-930a55c6-d68c-4c43-ad5c-d648c767fb05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482316ms +Oct 27 14:22:47.260: INFO: Pod "downwardapi-volume-930a55c6-d68c-4c43-ad5c-d648c767fb05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010050445s +STEP: Saw pod success +Oct 27 14:22:47.260: INFO: Pod "downwardapi-volume-930a55c6-d68c-4c43-ad5c-d648c767fb05" satisfied condition "Succeeded or Failed" +Oct 27 14:22:47.265: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-930a55c6-d68c-4c43-ad5c-d648c767fb05 container client-container: +STEP: delete the pod +Oct 27 14:22:47.283: INFO: Waiting for pod downwardapi-volume-930a55c6-d68c-4c43-ad5c-d648c767fb05 to disappear +Oct 27 14:22:47.287: INFO: Pod downwardapi-volume-930a55c6-d68c-4c43-ad5c-d648c767fb05 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:47.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8346" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":65,"skipped":1164,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:47.300: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1037 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-1037 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-1037 +I1027 14:22:47.478029 5703 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1037, replica count: 2 +Oct 27 14:22:50.529: INFO: Creating new exec pod +I1027 14:22:50.529537 5703 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:22:53.558: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:22:53.828: INFO: stderr: "+ + echonc -v -t -w 2 externalname-service 80\n hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:22:53.828: INFO: stdout: "" +Oct 27 14:22:54.829: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:22:55.098: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:22:55.098: INFO: stdout: "externalname-service-k6zcx" +Oct 27 14:22:55.098: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.28.46.217 80' +Oct 27 14:22:55.365: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.28.46.217 80\nConnection to 172.28.46.217 80 port [tcp/http] succeeded!\n" +Oct 27 14:22:55.365: INFO: stdout: "" +Oct 27 14:22:56.365: INFO: 
Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.28.46.217 80' +Oct 27 14:22:56.635: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.28.46.217 80\nConnection to 172.28.46.217 80 port [tcp/http] succeeded!\n" +Oct 27 14:22:56.635: INFO: stdout: "" +Oct 27 14:22:57.366: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.28.46.217 80' +Oct 27 14:22:57.637: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.28.46.217 80\nConnection to 172.28.46.217 80 port [tcp/http] succeeded!\n" +Oct 27 14:22:57.637: INFO: stdout: "" +Oct 27 14:22:58.365: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.28.46.217 80' +Oct 27 14:22:58.688: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.28.46.217 80\nConnection to 172.28.46.217 80 port [tcp/http] succeeded!\n" +Oct 27 14:22:58.688: INFO: stdout: "externalname-service-zpdbk" +Oct 27 14:22:58.688: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.34 30124' +Oct 27 14:22:58.964: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.34 30124\nConnection to 10.250.8.34 30124 port [tcp/*] succeeded!\n" +Oct 27 14:22:58.964: INFO: stdout: "" +Oct 27 14:22:59.964: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.34 30124' +Oct 27 14:23:00.232: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.34 30124\nConnection to 10.250.8.34 30124 port [tcp/*] succeeded!\n" +Oct 27 14:23:00.232: INFO: stdout: "externalname-service-k6zcx" +Oct 27 14:23:00.232: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 30124' +Oct 27 14:23:00.544: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.35 30124\nConnection to 10.250.8.35 30124 port [tcp/*] succeeded!\n" +Oct 27 14:23:00.544: INFO: stdout: "" +Oct 27 14:23:01.544: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1037 exec execpodj922p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 30124' +Oct 27 14:23:01.854: INFO: stderr: "+ + echonc hostName -v\n -t -w 2 10.250.8.35 30124\nConnection to 10.250.8.35 30124 port [tcp/*] succeeded!\n" +Oct 27 14:23:01.855: INFO: 
stdout: "externalname-service-zpdbk" +Oct 27 14:23:01.855: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:01.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1037" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":66,"skipped":1190,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:01.883: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3484 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-79af10f5-a663-42be-8bb7-392d694612ac +STEP: Creating a pod to test consume configMaps +Oct 27 14:23:02.056: INFO: Waiting up to 5m0s for pod "pod-configmaps-807c4338-d7eb-4585-9778-818d48b2e0a0" in namespace "configmap-3484" to be "Succeeded or Failed" +Oct 27 14:23:02.061: INFO: Pod "pod-configmaps-807c4338-d7eb-4585-9778-818d48b2e0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.873449ms +Oct 27 14:23:04.067: INFO: Pod "pod-configmaps-807c4338-d7eb-4585-9778-818d48b2e0a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010363957s +STEP: Saw pod success +Oct 27 14:23:04.067: INFO: Pod "pod-configmaps-807c4338-d7eb-4585-9778-818d48b2e0a0" satisfied condition "Succeeded or Failed" +Oct 27 14:23:04.072: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-807c4338-d7eb-4585-9778-818d48b2e0a0 container configmap-volume-test: +STEP: delete the pod +Oct 27 14:23:04.138: INFO: Waiting for pod pod-configmaps-807c4338-d7eb-4585-9778-818d48b2e0a0 to disappear +Oct 27 14:23:04.142: INFO: Pod pod-configmaps-807c4338-d7eb-4585-9778-818d48b2e0a0 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:04.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3484" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":67,"skipped":1237,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:04.155: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-6234 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:23:04.320: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:23:04.328: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:23:04.352: INFO: waiting for watch events with expected annotations +Oct 27 14:23:04.352: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:04.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-6234" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":68,"skipped":1298,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:04.410: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9037 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9037.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9037.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9037.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:23:06.653: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:06.715: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:06.773: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:06.799: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:06.811: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:06.819: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:06.827: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:06.841: INFO: Lookups using dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9037.svc.cluster.local jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local] + +Oct 27 14:23:11.907: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:11.914: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:11.954: 
INFO: Unable to read jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:11.962: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:11.983: INFO: Lookups using dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee failed for: [wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local] + +Oct 27 14:23:16.907: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:16.915: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:16.993: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:17.001: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:17.016: INFO: Lookups using dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee failed for: [wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local] + +Oct 27 14:23:21.906: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:21.915: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:21.953: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:21.962: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:21.978: INFO: Lookups using dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee failed for: [wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local 
wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local] + +Oct 27 14:23:26.906: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:26.914: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:26.994: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:27.002: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:27.018: INFO: Lookups using dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee failed for: [wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local] + +Oct 27 14:23:31.906: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:31.914: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:31.957: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:31.965: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local from pod dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee: the server could not find the requested resource (get pods dns-test-233439da-ceba-47e8-b66e-43a155358dee) +Oct 27 14:23:31.980: INFO: Lookups using dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee failed for: [wheezy_udp@dns-test-service-2.dns-9037.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9037.svc.cluster.local jessie_udp@dns-test-service-2.dns-9037.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9037.svc.cluster.local] + +Oct 27 14:23:37.015: INFO: DNS probes using dns-9037/dns-test-233439da-ceba-47e8-b66e-43a155358dee succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:37.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9037" for this suite. 
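+
+What the dig probes above verify: a pod that sets hostname and subdomain under a headless service resolves as <hostname>.<subdomain>.<namespace>.svc.cluster.local. A sketch with illustrative names, assuming the default namespace:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: sub-demo                   # hypothetical headless service
+spec:
+  clusterIP: None
+  selector:
+    app: sub-demo
+  ports:
+  - port: 80
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: querier
+  labels:
+    app: sub-demo
+spec:
+  hostname: querier
+  subdomain: sub-demo              # FQDN: querier.sub-demo.default.svc.cluster.local
+  containers:
+  - name: main
+    image: busybox
+    command: ["sleep", "3600"]
+EOF
+kubectl exec querier -- nslookup querier.sub-demo.default.svc.cluster.local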
+•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":69,"skipped":1329,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:37.044: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1870 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:37.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1870" for this suite. +•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":70,"skipped":1339,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:37.220: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2749 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2749 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-2749 +STEP: creating replication controller externalsvc in namespace services-2749 +I1027 14:23:37.388316 5703 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2749, replica count: 2 +I1027 14:23:40.439382 5703 
runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Oct 27 14:23:40.462: INFO: Creating new exec pod +Oct 27 14:23:42.482: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2749 exec execpodmthjw -- /bin/sh -x -c nslookup clusterip-service.services-2749.svc.cluster.local' +Oct 27 14:23:42.765: INFO: stderr: "+ nslookup clusterip-service.services-2749.svc.cluster.local\n" +Oct 27 14:23:42.765: INFO: stdout: "Server:\t\t172.24.0.10\nAddress:\t172.24.0.10#53\n\nclusterip-service.services-2749.svc.cluster.local\tcanonical name = externalsvc.services-2749.svc.cluster.local.\nName:\texternalsvc.services-2749.svc.cluster.local\nAddress: 172.31.124.76\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-2749, will wait for the garbage collector to delete the pods +Oct 27 14:23:42.828: INFO: Deleting ReplicationController externalsvc took: 6.591376ms +Oct 27 14:23:42.929: INFO: Terminating ReplicationController externalsvc pods took: 101.307349ms +Oct 27 14:23:44.941: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:44.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2749" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":71,"skipped":1344,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:44.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1403 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:23:45.479: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:23:48.506: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Oct 27 14:23:48.612: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:48.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1403" for this suite. +STEP: Destroying namespace "webhook-1403-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":72,"skipped":1365,"failed":0} +SS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:48.737: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6286 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:16.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6286" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":346,"completed":73,"skipped":1367,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:16.949: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-3347 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-9476 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-8892 +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:23.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-3347" for this suite. +STEP: Destroying namespace "nsdeletetest-9476" for this suite. +Oct 27 14:24:23.433: INFO: Namespace nsdeletetest-9476 was already deleted +STEP: Destroying namespace "nsdeletetest-8892" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":74,"skipped":1399,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:23.439: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9132 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:24:24.547: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:24:27.574: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:27.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9132" for this suite. +STEP: Destroying namespace "webhook-9132-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":75,"skipped":1413,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:27.861: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingressclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingressclass-6279 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:24:28.044: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:24:28.059: INFO: waiting for watch events with expected annotations +Oct 27 14:24:28.059: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:28.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-6279" for this suite. 
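+
+A minimal IngressClass against which the same create/patch/delete sequence can be replayed (name and controller string are illustrative):
+
+kubectl apply -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: IngressClass
+metadata:
+  name: api-demo-class             # hypothetical name
+spec:
+  controller: example.com/ingress-controller
+EOF
+kubectl patch ingressclass api-demo-class --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
+kubectl delete ingressclass api-demo-class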
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":76,"skipped":1437,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:28.095: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5341 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-67565204-6bf3-4452-ace3-507eac06099a +STEP: Creating a pod to test consume configMaps +Oct 27 14:24:28.258: INFO: Waiting up to 5m0s for pod "pod-configmaps-89a365cb-86a1-4df9-b85f-52afbb82b383" in namespace "configmap-5341" to be "Succeeded or Failed" +Oct 27 14:24:28.263: INFO: Pod "pod-configmaps-89a365cb-86a1-4df9-b85f-52afbb82b383": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423365ms +Oct 27 14:24:30.269: INFO: Pod "pod-configmaps-89a365cb-86a1-4df9-b85f-52afbb82b383": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011054828s +STEP: Saw pod success +Oct 27 14:24:30.269: INFO: Pod "pod-configmaps-89a365cb-86a1-4df9-b85f-52afbb82b383" satisfied condition "Succeeded or Failed" +Oct 27 14:24:30.274: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-89a365cb-86a1-4df9-b85f-52afbb82b383 container agnhost-container: +STEP: delete the pod +Oct 27 14:24:30.297: INFO: Waiting for pod pod-configmaps-89a365cb-86a1-4df9-b85f-52afbb82b383 to disappear +Oct 27 14:24:30.302: INFO: Pod pod-configmaps-89a365cb-86a1-4df9-b85f-52afbb82b383 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:30.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5341" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":77,"skipped":1445,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:30.315: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8084 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:24:30.738: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:24:33.762: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:24:33.767: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3925-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:37.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8084" for this suite. +STEP: Destroying namespace "webhook-8084-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":78,"skipped":1471,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:37.098: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4031 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:24:37.247: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:40.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-4031" for this suite. 
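+
+Sketch of the failure mode above: with restartPolicy Never, a failing init container drives the pod to Failed and the app container never starts (names illustrative):
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: init-fail-demo             # hypothetical name
+spec:
+  restartPolicy: Never
+  initContainers:
+  - name: init-fail
+    image: busybox
+    command: ["/bin/false"]        # exits non-zero on purpose
+  containers:
+  - name: app
+    image: busybox
+    command: ["echo", "never runs"]
+EOF
+kubectl get pod init-fail-demo     # expected status: Init:Error, phase Failed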
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":79,"skipped":1480,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:40.025: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-9683 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Oct 27 14:24:40.206: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9683 b8e3748e-0a70-4fd2-9e18-0e277e25d167 13719 0 2021-10-27 14:24:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:24:40.206: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9683 b8e3748e-0a70-4fd2-9e18-0e277e25d167 13720 0 2021-10-27 14:24:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:24:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:24:40.207: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9683 b8e3748e-0a70-4fd2-9e18-0e277e25d167 13721 0 2021-10-27 14:24:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:24:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Oct 27 14:24:50.241: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9683 b8e3748e-0a70-4fd2-9e18-0e277e25d167 13787 0 2021-10-27 14:24:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[{e2e.test Update v1 2021-10-27 14:24:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:24:50.241: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9683 b8e3748e-0a70-4fd2-9e18-0e277e25d167 13788 0 2021-10-27 14:24:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:24:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:24:50.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9683 b8e3748e-0a70-4fd2-9e18-0e277e25d167 13789 0 2021-10-27 14:24:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:24:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:50.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-9683" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":80,"skipped":1483,"failed":0} +SS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:50.254: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingress +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingress-1039 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:24:50.447: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:24:50.455: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:24:50.476: INFO: waiting for watch events with expected annotations +Oct 27 14:24:50.476: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:50.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-1039" for this suite. 
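+
+A minimal Ingress for replaying the API sequence above, including the /status subresource reads (names are illustrative; no controller needs to reconcile it):
+
+kubectl apply -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: api-demo-ingress           # hypothetical name
+spec:
+  rules:
+  - host: example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: demo-svc         # hypothetical backend service
+            port:
+              number: 80
+EOF
+kubectl get ingress api-demo-ingress -o jsonpath='{.status}'
+kubectl delete ingress api-demo-ingress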
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":81,"skipped":1485,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:50.533: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9529 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-bde1714e-d191-4aa5-ae6b-676b4343d53a +STEP: Creating configMap with name cm-test-opt-upd-d45a4b3c-a2a5-45ec-96d2-6e9b8ad008ea +STEP: Creating the pod +Oct 27 14:24:50.714: INFO: The status of Pod pod-configmaps-c3a59fcd-bc51-47e3-ae2b-eb5c94684a2f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:24:52.720: INFO: The status of Pod pod-configmaps-c3a59fcd-bc51-47e3-ae2b-eb5c94684a2f is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-bde1714e-d191-4aa5-ae6b-676b4343d53a +STEP: Updating configmap cm-test-opt-upd-d45a4b3c-a2a5-45ec-96d2-6e9b8ad008ea +STEP: Creating configMap with name cm-test-opt-create-26e39043-a856-4d99-aff5-709a61c2e3d4 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:54.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9529" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":82,"skipped":1494,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:54.907: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6300 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-88e5d7b7-ee10-45c8-bd47-60fe02e19b0b +STEP: Creating a pod to test consume configMaps +Oct 27 14:24:55.073: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-312acce2-d611-405f-8c43-0f0696152626" in namespace "projected-6300" to be "Succeeded or Failed" +Oct 27 14:24:55.077: INFO: Pod "pod-projected-configmaps-312acce2-d611-405f-8c43-0f0696152626": Phase="Pending", Reason="", readiness=false. Elapsed: 4.824108ms +Oct 27 14:24:57.083: INFO: Pod "pod-projected-configmaps-312acce2-d611-405f-8c43-0f0696152626": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010798057s +STEP: Saw pod success +Oct 27 14:24:57.083: INFO: Pod "pod-projected-configmaps-312acce2-d611-405f-8c43-0f0696152626" satisfied condition "Succeeded or Failed" +Oct 27 14:24:57.088: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-configmaps-312acce2-d611-405f-8c43-0f0696152626 container agnhost-container: +STEP: delete the pod +Oct 27 14:24:57.174: INFO: Waiting for pod pod-projected-configmaps-312acce2-d611-405f-8c43-0f0696152626 to disappear +Oct 27 14:24:57.178: INFO: Pod pod-projected-configmaps-312acce2-d611-405f-8c43-0f0696152626 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:57.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6300" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":83,"skipped":1543,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:57.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-8262 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:24:57.662: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:25:00.687: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:25:00.692: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:04.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-8262" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":84,"skipped":1568,"failed":0} +SSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:04.380: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1377 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-fec8a9b0-8878-4288-8e49-aa33e67f1ad8 in namespace container-probe-1377 +Oct 27 14:25:06.552: INFO: Started pod liveness-fec8a9b0-8878-4288-8e49-aa33e67f1ad8 in namespace container-probe-1377 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:25:06.557: INFO: Initial restart count of pod liveness-fec8a9b0-8878-4288-8e49-aa33e67f1ad8 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:29:07.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1377" for this suite. 
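A trimmed-down analogue of the liveness pod above: as long as something accepts TCP connections on 8080, the probe passes and restartCount stays at 0 (image and server loop are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo
spec:
  containers:
  - name: server
    image: busybox:1.34
    # keep accepting connections on 8080 so the probe keeps succeeding
    command: ["sh", "-c", "while true; do echo ok | nc -l -p 8080; done"]
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
kubectl get pod liveness-tcp-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
```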
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":85,"skipped":1571,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:29:07.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-4262 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 14:29:07.844: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:29:09.850: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:29:09.871: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:29:11.878: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 14:29:11.939: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 14:29:11.944: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 27 14:29:13.945: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 14:29:13.950: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:29:13.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4262" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":86,"skipped":1610,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:29:13.963: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-7538 +STEP: Waiting for a default service account to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:29:14.110: INFO: Creating pod... +Oct 27 14:29:14.126: INFO: Pod Quantity: 1 Status: Pending +Oct 27 14:29:15.133: INFO: Pod Quantity: 1 Status: Pending +Oct 27 14:29:16.132: INFO: Pod Status: Running +Oct 27 14:29:16.132: INFO: Creating service... +Oct 27 14:29:16.141: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/pods/agnhost/proxy/some/path/with/DELETE +Oct 27 14:29:16.202: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 14:29:16.202: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/pods/agnhost/proxy/some/path/with/GET +Oct 27 14:29:16.211: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 14:29:16.211: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/pods/agnhost/proxy/some/path/with/HEAD +Oct 27 14:29:16.220: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 14:29:16.220: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/pods/agnhost/proxy/some/path/with/OPTIONS +Oct 27 14:29:16.309: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 14:29:16.309: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/pods/agnhost/proxy/some/path/with/PATCH +Oct 27 14:29:16.317: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 27 14:29:16.317: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/pods/agnhost/proxy/some/path/with/POST +Oct 27 14:29:16.325: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 14:29:16.325: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/pods/agnhost/proxy/some/path/with/PUT +Oct 27 14:29:16.336: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 27 14:29:16.337: INFO: Starting http.Client for 
https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/services/test-service/proxy/some/path/with/DELETE +Oct 27 14:29:16.345: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 14:29:16.345: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/services/test-service/proxy/some/path/with/GET +Oct 27 14:29:16.355: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 14:29:16.355: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/services/test-service/proxy/some/path/with/HEAD +Oct 27 14:29:16.363: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 14:29:16.363: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/services/test-service/proxy/some/path/with/OPTIONS +Oct 27 14:29:16.373: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 14:29:16.373: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/services/test-service/proxy/some/path/with/PATCH +Oct 27 14:29:16.381: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 27 14:29:16.381: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/services/test-service/proxy/some/path/with/POST +Oct 27 14:29:16.390: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 14:29:16.390: INFO: Starting http.Client for https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-7538/services/test-service/proxy/some/path/with/PUT +Oct 27 14:29:16.399: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:29:16.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-7538" for this suite. 
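The ProxyWithPath URLs above can be hit by hand through a local kubectl proxy; the pod and service names below mirror the test's but are otherwise placeholders:

```bash
kubectl proxy --port=8001 &   # background API proxy (needs shell job control)
curl -X GET  "http://127.0.0.1:8001/api/v1/namespaces/default/pods/agnhost/proxy/some/path/with/GET"
curl -X POST "http://127.0.0.1:8001/api/v1/namespaces/default/services/test-service/proxy/some/path/with/POST"
kill %1                       # stop the background proxy
```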
+•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":87,"skipped":1636,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:29:16.413: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-8668 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:29:16.576: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:30:16.627: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:30:16.632: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-8994 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:30:16.798: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Oct 27 14:30:16.803: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:30:16.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-8994" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:30:16.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-8668" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":88,"skipped":1649,"failed":0} +S +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:30:16.892: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-8936 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 14:30:17.066: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:30:19.071: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:30:19.091: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:30:21.097: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 14:30:21.164: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 14:30:21.169: INFO: Pod pod-with-poststart-http-hook still exists +Oct 27 14:30:23.170: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 14:30:23.176: INFO: Pod pod-with-poststart-http-hook still exists +Oct 27 14:30:25.169: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 14:30:25.177: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:30:25.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-8936" for this suite. 
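This run differs from the earlier exec-hook case only in the hook type: an httpGet postStart hook makes the kubelet issue an HTTP request instead of running a command. A sketch with a placeholder target (the suite points this at its pod-handle-http-request helper pod):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo
spec:
  containers:
  - name: app
    image: busybox:1.34
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.0.0.10          # placeholder: IP of a pod serving this endpoint
          path: /echo?msg=poststart
          port: 8080
EOF
```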
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":89,"skipped":1650,"failed":0} +S +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:30:25.192: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-749 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Oct 27 14:30:25.358: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15626 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:30:25.358: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15626 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Oct 27 14:30:35.369: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15700 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:30:35.369: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15700 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Oct 27 14:30:45.382: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15744 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:30:45.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15744 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Oct 27 14:30:55.389: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15787 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:30:55.389: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-749 83f708e9-8a2e-41cc-ada7-9e80901b5221 15787 0 2021-10-27 14:30:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:30:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Oct 27 14:31:05.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-749 6558db59-0797-482f-944b-c3f9ed7893b1 15831 0 2021-10-27 14:31:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:31:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:31:05.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-749 6558db59-0797-482f-944b-c3f9ed7893b1 15831 0 2021-10-27 14:31:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:31:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Oct 27 14:31:15.405: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-749 6558db59-0797-482f-944b-c3f9ed7893b1 15874 0 2021-10-27 14:31:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:31:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:31:15.405: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-749 6558db59-0797-482f-944b-c3f9ed7893b1 15874 0 2021-10-27 14:31:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:31:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:25.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-749" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":90,"skipped":1651,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:25.423: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4781 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:31:25.588: INFO: Waiting up to 5m0s for pod "downward-api-b9e658f1-0e65-4b02-a1c7-00a5bd742311" in namespace "downward-api-4781" to be "Succeeded or Failed" +Oct 27 14:31:25.596: INFO: Pod "downward-api-b9e658f1-0e65-4b02-a1c7-00a5bd742311": Phase="Pending", Reason="", readiness=false. Elapsed: 7.120003ms +Oct 27 14:31:27.602: INFO: Pod "downward-api-b9e658f1-0e65-4b02-a1c7-00a5bd742311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013459969s +STEP: Saw pod success +Oct 27 14:31:27.602: INFO: Pod "downward-api-b9e658f1-0e65-4b02-a1c7-00a5bd742311" satisfied condition "Succeeded or Failed" +Oct 27 14:31:27.607: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downward-api-b9e658f1-0e65-4b02-a1c7-00a5bd742311 container dapi-container: +STEP: delete the pod +Oct 27 14:31:27.626: INFO: Waiting for pod downward-api-b9e658f1-0e65-4b02-a1c7-00a5bd742311 to disappear +Oct 27 14:31:27.630: INFO: Pod downward-api-b9e658f1-0e65-4b02-a1c7-00a5bd742311 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:27.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4781" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":91,"skipped":1678,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:27.643: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-181 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-5c9a4dab-3d1b-4555-8d41-7271bba52b58 +STEP: Creating a pod to test consume configMaps +Oct 27 14:31:27.816: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e43787dd-33f7-4286-b9d9-eca37d83608c" in namespace "projected-181" to be "Succeeded or Failed" +Oct 27 14:31:27.820: INFO: Pod "pod-projected-configmaps-e43787dd-33f7-4286-b9d9-eca37d83608c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260102ms +Oct 27 14:31:29.826: INFO: Pod "pod-projected-configmaps-e43787dd-33f7-4286-b9d9-eca37d83608c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010115721s +STEP: Saw pod success +Oct 27 14:31:29.826: INFO: Pod "pod-projected-configmaps-e43787dd-33f7-4286-b9d9-eca37d83608c" satisfied condition "Succeeded or Failed" +Oct 27 14:31:29.831: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-configmaps-e43787dd-33f7-4286-b9d9-eca37d83608c container agnhost-container: +STEP: delete the pod +Oct 27 14:31:29.849: INFO: Waiting for pod pod-projected-configmaps-e43787dd-33f7-4286-b9d9-eca37d83608c to disappear +Oct 27 14:31:29.853: INFO: Pod pod-projected-configmaps-e43787dd-33f7-4286-b9d9-eca37d83608c no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:29.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-181" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":92,"skipped":1705,"failed":0} +S +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:29.866: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename limitrange +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-9199 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Oct 27 14:31:30.027: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Oct 27 14:31:30.035: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 14:31:30.035: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Oct 27 14:31:30.050: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 14:31:30.050: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Oct 27 14:31:30.065: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Oct 27 14:31:30.065: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: 
Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Oct 27 14:31:37.124: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:37.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-9199" for this suite. +•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":346,"completed":93,"skipped":1706,"failed":0} + +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:37.149: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5432 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:31:37.714: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:31:40.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:40.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5432" for this suite. +STEP: Destroying namespace "webhook-5432-markers" for this suite. 
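A LimitRange along the lines of the one exercised above; the LimitRanger admission plugin fills in these defaults for pods that omit resource requirements (the values here are round placeholders, not the test's exact quantities):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo
spec:
  limits:
  - type: Container
    defaultRequest:        # used when a container sets no requests
      cpu: 100m
      memory: 200Mi
    default:               # used when a container sets no limits
      cpu: 500m
      memory: 500Mi
EOF
kubectl run lr-probe --image=busybox:1.34 --restart=Never -- sleep 60
kubectl get pod lr-probe -o jsonpath='{.spec.containers[0].resources}'
```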
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":94,"skipped":1706,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:40.957: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-2822 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Oct 27 14:31:41.114: INFO: Pod name sample-pod: Found 0 pods out of 3 +Oct 27 14:31:46.120: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Oct 27 14:31:46.161: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:46.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-2822" for this suite. 
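Listing and collection-deleting ReplicaSets, as the test does through the API's deletecollection verb, has a direct kubectl counterpart (the label selector is illustrative):

```bash
kubectl get replicasets -l name=sample-pod      # listing by label
kubectl delete replicasets -l name=sample-pod   # DeleteCollection equivalent
```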
+•{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":95,"skipped":1782,"failed":0} +SSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:46.192: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4932 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:31:46.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:31:49.676: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:49.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4932" for this suite. +STEP: Destroying namespace "webhook-4932-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":96,"skipped":1785,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:49.740: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2903 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:49.945: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:31:49.963: INFO: The status of Pod pod-exec-websocket-48f589d8-3860-43fc-bfc1-c8ed67399f84 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:31:51.969: INFO: The status of Pod pod-exec-websocket-48f589d8-3860-43fc-bfc1-c8ed67399f84 is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:52.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2903" for this suite. 
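kubectl exec drives the same exec subresource this test hits (kubectl speaks SPDY where the test opens a raw websocket, but the server-side path is shared); a quick check with an illustrative pod:

```bash
kubectl run ws-demo --image=busybox:1.34 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-demo
kubectl exec ws-demo -- echo remote command execution works
kubectl delete pod ws-demo
```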
+•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":97,"skipped":1797,"failed":0} +SSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:52.137: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-5412 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:52.303: INFO: The status of Pod busybox-scheduling-27bca5e5-c766-45c9-955d-2805db315af5 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:31:54.309: INFO: The status of Pod busybox-scheduling-27bca5e5-c766-45c9-955d-2805db315af5 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:54.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-5412" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":98,"skipped":1800,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:54.338: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3441 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Oct 27 14:31:54.529: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:54.529: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:54.529: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:54.529: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:54.535: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:54.535: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:54.557: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:54.557: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:31:55.956: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 14:31:55.956: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 14:31:56.023: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Oct 27 14:31:56.066: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment 
test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 0 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.070: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.071: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.071: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.159: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.159: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:56.271: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:56.271: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:56.362: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:56.362: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:57.023: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:57.023: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:57.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +STEP: listing Deployments +Oct 27 14:31:57.042: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Oct 27 14:31:57.053: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Oct 27 14:31:57.063: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:57.063: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:57.063: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:57.069: INFO: observed Deployment test-deployment in namespace deployment-3441 with 
ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:57.075: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:57.081: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:58.032: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:58.043: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:58.049: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:58.054: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:58.064: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:31:59.009: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 1 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 2 +Oct 27 14:31:59.037: INFO: observed Deployment test-deployment in namespace deployment-3441 with ReadyReplicas 3 +STEP: deleting the Deployment +Oct 27 14:31:59.048: INFO: observed event type MODIFIED +Oct 27 14:31:59.048: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: 
observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +Oct 27 14:31:59.049: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:31:59.055: INFO: Log out all the ReplicaSets if there is no deployment created +Oct 27 14:31:59.059: INFO: ReplicaSet "test-deployment-56c98d85f9": +&ReplicaSet{ObjectMeta:{test-deployment-56c98d85f9 deployment-3441 74df52f7-2fac-45cd-a176-36c6bb90fe07 16458 4 2021-10-27 14:31:56 +0000 UTC map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 0c6fdb92-fe1f-4eb7-b6b5-91c736784029 0xc005f1bd87 0xc005f1bd88}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:31:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6fdb92-fe1f-4eb7-b6b5-91c736784029\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:31:59 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 56c98d85f9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.5 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005f1be10 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Oct 27 14:31:59.065: INFO: pod: "test-deployment-56c98d85f9-22hn4": +&Pod{ObjectMeta:{test-deployment-56c98d85f9-22hn4 test-deployment-56c98d85f9- deployment-3441 93f08d4e-d3d5-4913-89d4-48d674c9ad15 16451 0 2021-10-27 14:31:57 +0000 UTC 2021-10-27 14:31:59 +0000 UTC 0xc004f18388 map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[cni.projectcalico.org/containerID:fbb7ec04aa1cd16398299b3908d31558fa944057e0b4b88783373944b9e810c5 cni.projectcalico.org/podIP:172.16.0.46/32 cni.projectcalico.org/podIPs:172.16.0.46/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet 
test-deployment-56c98d85f9 74df52f7-2fac-45cd-a176-36c6bb90fe07 0xc004f18427 0xc004f18428}] [] [{calico Update v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74df52f7-2fac-45cd-a176-36c6bb90fe07\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:31:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cgsbw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cgsbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:172.16.0.46,StartTime:2021-10-27 14:31:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:31:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.5,ImageID:k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07,ContainerID:containerd://04ff0c36e94b42d34119060bef46412e7a1b666d3edaa3e8c9f38abcf4de2789,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.0.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 27 14:31:59.065: INFO: pod: "test-deployment-56c98d85f9-fzd5j": +&Pod{ObjectMeta:{test-deployment-56c98d85f9-fzd5j test-deployment-56c98d85f9- deployment-3441 7969774f-ce8e-450d-a456-bedeca2c22c9 16456 0 2021-10-27 14:31:56 +0000 UTC 2021-10-27 14:32:00 +0000 UTC 0xc004f18700 map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[cni.projectcalico.org/containerID:b9518727d181fed50a2df8dd505fc2d18e33db237963b500b0be41ee63765e24 cni.projectcalico.org/podIP:172.16.1.115/32 cni.projectcalico.org/podIPs:172.16.1.115/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-56c98d85f9 74df52f7-2fac-45cd-a176-36c6bb90fe07 0xc004f18797 0xc004f18798}] [] [{calico Update v1 2021-10-27 14:31:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:31:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74df52f7-2fac-45cd-a176-36c6bb90fe07\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2g4w6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2g4w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[s
tring]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.115,StartTime:2021-10-27 14:31:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:31:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.5,ImageID:k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07,ContainerID:containerd://84972df5dc3d4e2b70df2a407e6c53131e0bd0b07612c3f6ae51de2ef1282d49,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 27 14:31:59.065: INFO: ReplicaSet "test-deployment-855f7994f9": +&ReplicaSet{ObjectMeta:{test-deployment-855f7994f9 deployment-3441 67b6f5dc-f550-4645-b56e-32e21d3b0bb3 16394 3 2021-10-27 14:31:54 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 0c6fdb92-fe1f-4eb7-b6b5-91c736784029 0xc005f1be77 0xc005f1be78}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:31:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6fdb92-fe1f-4eb7-b6b5-91c736784029\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 855f7994f9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005f1bf00 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Oct 27 14:31:59.070: INFO: ReplicaSet "test-deployment-d4dfddfbf": +&ReplicaSet{ObjectMeta:{test-deployment-d4dfddfbf deployment-3441 671547ff-8cd5-4883-a6ad-70aea917214c 16453 2 2021-10-27 14:31:57 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 0c6fdb92-fe1f-4eb7-b6b5-91c736784029 0xc005f1bf67 0xc005f1bf68}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6fdb92-fe1f-4eb7-b6b5-91c736784029\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:31:58 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: d4dfddfbf,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005f1bff0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Oct 27 14:31:59.075: INFO: pod: "test-deployment-d4dfddfbf-69dd4": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-69dd4 test-deployment-d4dfddfbf- deployment-3441 86bf5734-ca99-47bf-8b6c-9d40c0c1e6b8 16424 0 2021-10-27 14:31:57 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[cni.projectcalico.org/containerID:961960fb58dbd4ebe31370fe96661f3cbe6131c315d0d2ba58af64fedebcf113 cni.projectcalico.org/podIP:172.16.1.116/32 cni.projectcalico.org/podIPs:172.16.1.116/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 671547ff-8cd5-4883-a6ad-70aea917214c 0xc004f19e87 0xc004f19e88}] [] [{calico Update v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"671547ff-8cd5-4883-a6ad-70aea917214c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:31:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.116\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mshc2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mshc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 
14:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.116,StartTime:2021-10-27 14:31:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:31:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://ab5167fc51ef26e661dc07c63e80655a011f52add93aa0b9823311db1087f4a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.116,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 27 14:31:59.076: INFO: pod: "test-deployment-d4dfddfbf-h2nfz": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-h2nfz test-deployment-d4dfddfbf- deployment-3441 70943846-b20b-4161-94e0-ef8f81ae35ef 16452 0 2021-10-27 14:31:58 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[cni.projectcalico.org/containerID:e49beaeaf0470f3c9a1313df720601257fdbc6bb7608f8c7e3985604a680dfb5 cni.projectcalico.org/podIP:172.16.0.47/32 cni.projectcalico.org/podIPs:172.16.0.47/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 671547ff-8cd5-4883-a6ad-70aea917214c 0xc0045480a7 0xc0045480a8}] [] [{calico Update v1 2021-10-27 14:31:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:31:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"671547ff-8cd5-4883-a6ad-70aea917214c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:31:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.47\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vgdpx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgdpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 
14:31:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:172.16.0.47,StartTime:2021-10-27 14:31:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:31:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://94356f375435cda45ed0c683a17f522d822149bb4ca7c296611dc4ddb1854f7d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.0.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:59.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-3441" for this suite. +•{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":99,"skipped":1831,"failed":0} +SS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:59.086: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-2366 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pod templates +Oct 27 14:31:59.327: INFO: created test-podtemplate-1 +Oct 27 14:31:59.333: INFO: created test-podtemplate-2 +Oct 27 14:31:59.338: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Oct 27 14:31:59.342: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Oct 27 14:31:59.354: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:59.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-2366" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":100,"skipped":1833,"failed":0} +SSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:59.369: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-2416 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:59.528: INFO: The status of Pod busybox-host-aliasesc0f47993-e99b-4b75-a4cc-4c57ca1068aa is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:01.534: INFO: The status of Pod busybox-host-aliasesc0f47993-e99b-4b75-a4cc-4c57ca1068aa is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:01.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-2416" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":101,"skipped":1836,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:01.563: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8758 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Oct 27 14:32:01.724: INFO: Pod name pod-release: Found 0 pods out of 1 +Oct 27 14:32:06.731: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:07.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8758" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":102,"skipped":1843,"failed":0} +SSSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:07.767: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-5093 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 14:32:07.929: INFO: Waiting up to 5m0s for pod "security-context-b825f9ee-7a62-46da-b41b-697183aa9c5d" in namespace "security-context-5093" to be "Succeeded or Failed" +Oct 27 14:32:07.934: INFO: Pod "security-context-b825f9ee-7a62-46da-b41b-697183aa9c5d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.576654ms +Oct 27 14:32:09.939: INFO: Pod "security-context-b825f9ee-7a62-46da-b41b-697183aa9c5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010118218s +STEP: Saw pod success +Oct 27 14:32:09.939: INFO: Pod "security-context-b825f9ee-7a62-46da-b41b-697183aa9c5d" satisfied condition "Succeeded or Failed" +Oct 27 14:32:09.944: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod security-context-b825f9ee-7a62-46da-b41b-697183aa9c5d container test-container: +STEP: delete the pod +Oct 27 14:32:09.962: INFO: Waiting for pod security-context-b825f9ee-7a62-46da-b41b-697183aa9c5d to disappear +Oct 27 14:32:09.966: INFO: Pod security-context-b825f9ee-7a62-46da-b41b-697183aa9c5d no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:09.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-5093" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":103,"skipped":1847,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:09.979: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-8984 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:14.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8984" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":104,"skipped":1870,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:14.986: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-3955 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:15.133: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption-2 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2-1134 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-3955 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:15.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-1134" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:15.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-3955" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":105,"skipped":1890,"failed":0} +SS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:15.355: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8987 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Service +STEP: watching for the Service to be added +Oct 27 14:32:15.522: INFO: Found Service test-service-565b8 in namespace services-8987 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Oct 27 14:32:15.522: INFO: Service test-service-565b8 created +STEP: Getting /status +Oct 27 14:32:15.527: INFO: Service test-service-565b8 has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Oct 27 14:32:15.537: INFO: observed Service test-service-565b8 in namespace services-8987 with annotations: map[] & LoadBalancer: {[]} +Oct 27 14:32:15.537: INFO: Found Service test-service-565b8 in namespace services-8987 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Oct 27 14:32:15.537: INFO: Service test-service-565b8 has service status patched +STEP: updating the ServiceStatus +Oct 27 14:32:15.546: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Oct 27 14:32:15.549: INFO: Observed Service test-service-565b8 in namespace services-8987 with annotations: map[] & Conditions: {[]} +Oct 27 14:32:15.549: INFO: Observed event: &Service{ObjectMeta:{test-service-565b8 services-8987 d0a6fdc3-e8b9-4711-b86d-4c11372d67ea 16807 0 2021-10-27 14:32:15 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-27 14:32:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2021-10-27 14:32:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:172.31.193.7,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[172.31.193.7],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Oct 27 14:32:15.549: INFO: Found Service test-service-565b8 in namespace services-8987 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:32:15.550: INFO: Service test-service-565b8 has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Oct 27 14:32:15.559: INFO: observed Service test-service-565b8 in namespace services-8987 with labels: map[test-service-static:true] +Oct 27 14:32:15.559: INFO: observed Service test-service-565b8 in namespace services-8987 with labels: map[test-service-static:true] +Oct 27 14:32:15.559: INFO: observed Service test-service-565b8 in namespace services-8987 with labels: map[test-service-static:true] +Oct 27 14:32:15.559: INFO: Found Service test-service-565b8 in namespace services-8987 with labels: map[test-service:patched test-service-static:true] +Oct 27 14:32:15.559: INFO: Service test-service-565b8 patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Oct 27 14:32:15.572: INFO: Observed event: ADDED +Oct 27 14:32:15.572: INFO: Observed event: MODIFIED +Oct 27 14:32:15.572: INFO: Observed event: MODIFIED +Oct 27 14:32:15.572: INFO: Observed event: MODIFIED +Oct 27 14:32:15.572: INFO: Found Service test-service-565b8 in namespace services-8987 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Oct 27 14:32:15.572: INFO: Service test-service-565b8 deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:15.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8987" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":106,"skipped":1892,"failed":0} +S +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:15.583: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-1261 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's command +Oct 27 14:32:15.742: INFO: Waiting up to 5m0s for pod "var-expansion-db6aeb56-d4c2-4090-9ccf-74568857e595" in namespace "var-expansion-1261" to be "Succeeded or Failed" +Oct 27 14:32:15.747: INFO: Pod "var-expansion-db6aeb56-d4c2-4090-9ccf-74568857e595": Phase="Pending", Reason="", readiness=false. Elapsed: 4.802717ms +Oct 27 14:32:17.753: INFO: Pod "var-expansion-db6aeb56-d4c2-4090-9ccf-74568857e595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010612829s +STEP: Saw pod success +Oct 27 14:32:17.753: INFO: Pod "var-expansion-db6aeb56-d4c2-4090-9ccf-74568857e595" satisfied condition "Succeeded or Failed" +Oct 27 14:32:17.765: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod var-expansion-db6aeb56-d4c2-4090-9ccf-74568857e595 container dapi-container: +STEP: delete the pod +Oct 27 14:32:17.795: INFO: Waiting for pod var-expansion-db6aeb56-d4c2-4090-9ccf-74568857e595 to disappear +Oct 27 14:32:17.800: INFO: Pod var-expansion-db6aeb56-d4c2-4090-9ccf-74568857e595 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:17.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-1261" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":107,"skipped":1893,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:17.813: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-4677 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:32:17.997: INFO: The status of Pod pod-secrets-90749785-5914-4668-b1a1-48daaf2e2981 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:20.003: INFO: The status of Pod pod-secrets-90749785-5914-4668-b1a1-48daaf2e2981 is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:20.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-4677" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":108,"skipped":1899,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:20.039: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6847 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:20.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6847" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":109,"skipped":1935,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:20.235: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8217 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:32:20.403: INFO: The status of Pod pod-update-157a5ed4-541f-4b5b-b0a3-3c8a5299c4ab is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:22.409: INFO: The status of Pod pod-update-157a5ed4-541f-4b5b-b0a3-3c8a5299c4ab is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 14:32:22.930: INFO: Successfully updated pod "pod-update-157a5ed4-541f-4b5b-b0a3-3c8a5299c4ab" +STEP: verifying the updated pod is in kubernetes +Oct 27 14:32:22.939: INFO: Pod update OK +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:22.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8217" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":1951,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:22.959: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7010 +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:32:23.107: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:30.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7010" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":111,"skipped":1959,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:30.273: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-8687 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:32:30.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 14:32:42.281: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8687 --namespace=crd-publish-openapi-8687 create -f -' +Oct 27 14:32:42.809: INFO: stderr: "" +Oct 27 14:32:42.809: INFO: 
stdout: "e2e-test-crd-publish-openapi-2972-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 14:32:42.809: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8687 --namespace=crd-publish-openapi-8687 delete e2e-test-crd-publish-openapi-2972-crds test-cr' +Oct 27 14:32:42.894: INFO: stderr: "" +Oct 27 14:32:42.895: INFO: stdout: "e2e-test-crd-publish-openapi-2972-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Oct 27 14:32:42.895: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8687 --namespace=crd-publish-openapi-8687 apply -f -' +Oct 27 14:32:43.070: INFO: stderr: "" +Oct 27 14:32:43.070: INFO: stdout: "e2e-test-crd-publish-openapi-2972-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 14:32:43.070: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8687 --namespace=crd-publish-openapi-8687 delete e2e-test-crd-publish-openapi-2972-crds test-cr' +Oct 27 14:32:43.144: INFO: stderr: "" +Oct 27 14:32:43.144: INFO: stdout: "e2e-test-crd-publish-openapi-2972-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Oct 27 14:32:43.144: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8687 explain e2e-test-crd-publish-openapi-2972-crds' +Oct 27 14:32:43.309: INFO: stderr: "" +Oct 27 14:32:43.309: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2972-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:46.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8687" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":112,"skipped":1985,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:46.837: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3551 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-84e25f4f-5b6d-45a0-b8e5-f7c676788548 +STEP: Creating a pod to test consume configMaps +Oct 27 14:32:47.005: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44eee952-7a63-434f-918d-48b1465e38b8" in namespace "projected-3551" to be "Succeeded or Failed" +Oct 27 14:32:47.010: INFO: Pod "pod-projected-configmaps-44eee952-7a63-434f-918d-48b1465e38b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.97221ms +Oct 27 14:32:49.015: INFO: Pod "pod-projected-configmaps-44eee952-7a63-434f-918d-48b1465e38b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010472661s +STEP: Saw pod success +Oct 27 14:32:49.016: INFO: Pod "pod-projected-configmaps-44eee952-7a63-434f-918d-48b1465e38b8" satisfied condition "Succeeded or Failed" +Oct 27 14:32:49.020: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-configmaps-44eee952-7a63-434f-918d-48b1465e38b8 container agnhost-container: +STEP: delete the pod +Oct 27 14:32:49.045: INFO: Waiting for pod pod-projected-configmaps-44eee952-7a63-434f-918d-48b1465e38b8 to disappear +Oct 27 14:32:49.049: INFO: Pod pod-projected-configmaps-44eee952-7a63-434f-918d-48b1465e38b8 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:49.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3551" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":113,"skipped":1999,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:49.062: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-4524 +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:32:49.218: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:50.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-4524" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":114,"skipped":2064,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:50.262: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5800 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-5503942f-4b6c-4531-85e3-8881058458c9 +STEP: Creating a pod to test consume secrets +Oct 27 14:32:50.431: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18f79551-f768-4c76-94c5-28b3bc97e780" in namespace "projected-5800" to be "Succeeded or Failed" +Oct 27 14:32:50.437: INFO: Pod "pod-projected-secrets-18f79551-f768-4c76-94c5-28b3bc97e780": Phase="Pending", Reason="", readiness=false. Elapsed: 6.928801ms +Oct 27 14:32:52.443: INFO: Pod "pod-projected-secrets-18f79551-f768-4c76-94c5-28b3bc97e780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012584099s +STEP: Saw pod success +Oct 27 14:32:52.443: INFO: Pod "pod-projected-secrets-18f79551-f768-4c76-94c5-28b3bc97e780" satisfied condition "Succeeded or Failed" +Oct 27 14:32:52.448: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-secrets-18f79551-f768-4c76-94c5-28b3bc97e780 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:32:52.467: INFO: Waiting for pod pod-projected-secrets-18f79551-f768-4c76-94c5-28b3bc97e780 to disappear +Oct 27 14:32:52.472: INFO: Pod pod-projected-secrets-18f79551-f768-4c76-94c5-28b3bc97e780 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:52.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5800" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":115,"skipped":2073,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:52.485: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7817 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:32:52.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64201475-659c-4a8b-a47c-226a225f72d3" in namespace "projected-7817" to be "Succeeded or Failed" +Oct 27 14:32:52.649: INFO: Pod "downwardapi-volume-64201475-659c-4a8b-a47c-226a225f72d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.688041ms +Oct 27 14:32:54.655: INFO: Pod "downwardapi-volume-64201475-659c-4a8b-a47c-226a225f72d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010482847s +STEP: Saw pod success +Oct 27 14:32:54.655: INFO: Pod "downwardapi-volume-64201475-659c-4a8b-a47c-226a225f72d3" satisfied condition "Succeeded or Failed" +Oct 27 14:32:54.659: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-64201475-659c-4a8b-a47c-226a225f72d3 container client-container: +STEP: delete the pod +Oct 27 14:32:54.679: INFO: Waiting for pod downwardapi-volume-64201475-659c-4a8b-a47c-226a225f72d3 to disappear +Oct 27 14:32:54.683: INFO: Pod downwardapi-volume-64201475-659c-4a8b-a47c-226a225f72d3 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:54.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7817" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":116,"skipped":2100,"failed":0} +SSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:54.696: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-8578 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-8578 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:32:54.843: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:32:54.881: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:56.886: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:32:58.888: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:00.886: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:02.887: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:04.887: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:06.887: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:08.886: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:10.887: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:12.888: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:14.891: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:16.888: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:33:16.897: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:33:18.929: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:33:18.929: INFO: Breadth first check of 172.16.0.48 on host 10.250.8.34... +Oct 27 14:33:18.934: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.16.1.128:9080/dial?request=hostname&protocol=udp&host=172.16.0.48&port=8081&tries=1'] Namespace:pod-network-test-8578 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:33:18.934: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:33:19.137: INFO: Waiting for responses: map[] +Oct 27 14:33:19.137: INFO: reached 172.16.0.48 after 0/1 tries +Oct 27 14:33:19.137: INFO: Breadth first check of 172.16.1.127 on host 10.250.8.35... 
+Oct 27 14:33:19.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.16.1.128:9080/dial?request=hostname&protocol=udp&host=172.16.1.127&port=8081&tries=1'] Namespace:pod-network-test-8578 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:33:19.142: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:33:19.401: INFO: Waiting for responses: map[] +Oct 27 14:33:19.401: INFO: reached 172.16.1.127 after 0/1 tries +Oct 27 14:33:19.401: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:19.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-8578" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":117,"skipped":2107,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:19.415: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-3083 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-3083 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:33:19.564: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:33:19.603: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:33:21.609: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:23.611: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:25.610: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:27.610: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:29.610: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:31.610: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:33.611: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:35.610: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:37.611: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:39.614: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:33:41.611: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:33:41.620: INFO: The status of Pod netserver-1 is 
Running (Ready = true) +STEP: Creating test pods +Oct 27 14:33:43.669: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:33:43.669: INFO: Going to poll 172.16.0.49 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:33:43.674: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.16.0.49 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3083 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:33:43.674: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:33:44.949: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 14:33:44.949: INFO: Going to poll 172.16.1.129 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:33:44.954: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.16.1.129 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3083 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:33:44.954: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:33:46.191: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:46.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-3083" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":118,"skipped":2164,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:46.208: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-8321 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:33:46.366: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Oct 27 14:33:51.372: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 14:33:51.372: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Oct 27 14:33:53.378: INFO: Creating deployment "test-rollover-deployment" +Oct 27 14:33:53.390: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Oct 27 14:33:55.399: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Oct 27 14:33:55.408: INFO: Ensure that both replica 
sets have 1 created replica +Oct 27 14:33:55.418: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Oct 27 14:33:55.428: INFO: Updating deployment test-rollover-deployment +Oct 27 14:33:55.428: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Oct 27 14:33:57.439: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Oct 27 14:33:57.448: INFO: Make sure deployment "test-rollover-deployment" is complete +Oct 27 14:33:57.458: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:33:57.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942036, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:33:59.469: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:33:59.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942036, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:34:01.470: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:34:01.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942036, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:34:03.469: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:34:03.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942036, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:34:05.469: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:34:05.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942036, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942033, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:34:07.470: INFO: +Oct 27 14:34:07.470: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:34:07.483: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-8321 98f86622-57a3-4daf-bcad-e4b05de4bca2 17784 2 2021-10-27 14:33:53 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 14:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:34:06 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006164cb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:33:53 +0000 UTC,LastTransitionTime:2021-10-27 14:33:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-27 14:34:06 +0000 UTC,LastTransitionTime:2021-10-27 14:33:53 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:34:07.489: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-8321 785251d5-efdf-48a9-87cc-5a25aef532ed 17777 2 2021-10-27 14:33:55 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 98f86622-57a3-4daf-bcad-e4b05de4bca2 0xc006165380 0xc006165381}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:33:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f86622-57a3-4daf-bcad-e4b05de4bca2\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:34:06 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006165428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:34:07.489: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Oct 27 14:34:07.489: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8321 9945d36f-c438-497f-8f5f-e692ef07970d 17783 2 2021-10-27 14:33:46 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 98f86622-57a3-4daf-bcad-e4b05de4bca2 0xc0061650d7 0xc0061650d8}] [] [{e2e.test Update apps/v1 2021-10-27 14:33:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:34:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f86622-57a3-4daf-bcad-e4b05de4bca2\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:34:06 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0061651a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:34:07.489: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-8321 935cb519-dd0b-4299-bb3e-3b3085cb0e30 17713 2 2021-10-27 14:33:53 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 98f86622-57a3-4daf-bcad-e4b05de4bca2 0xc006165227 0xc006165228}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:33:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f86622-57a3-4daf-bcad-e4b05de4bca2\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:33:55 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0061652f8 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:34:07.495: INFO: Pod "test-rollover-deployment-98c5f4599-pr96j" is available: +&Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-pr96j test-rollover-deployment-98c5f4599- deployment-8321 b552faa2-4b8f-4e26-b5d8-d5dbd4881a02 17731 0 2021-10-27 14:33:55 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[cni.projectcalico.org/containerID:b9f2546c315d0b4e910cf1b612d6434200b4c3fd199dc224209b0a52cbf3c712 cni.projectcalico.org/podIP:172.16.1.133/32 cni.projectcalico.org/podIPs:172.16.1.133/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 785251d5-efdf-48a9-87cc-5a25aef532ed 0xc006165a00 0xc006165a01}] [] [{calico Update v1 2021-10-27 14:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"785251d5-efdf-48a9-87cc-5a25aef532ed\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:33:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vgfqf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgfqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCont
ainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:33:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:33:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:33:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.133,StartTime:2021-10-27 14:33:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:33:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://05fee2b6adf03cf957327b3e1bd483981129714b67325018ac72eaa275392ef9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:07.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8321" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":119,"skipped":2194,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:07.508: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7460 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:34:07.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afe53454-e608-4138-9d27-c652efe92411" in namespace "downward-api-7460" to be "Succeeded or Failed" +Oct 27 14:34:07.677: INFO: Pod "downwardapi-volume-afe53454-e608-4138-9d27-c652efe92411": Phase="Pending", Reason="", readiness=false. Elapsed: 5.046106ms +Oct 27 14:34:09.682: INFO: Pod "downwardapi-volume-afe53454-e608-4138-9d27-c652efe92411": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010119641s +STEP: Saw pod success +Oct 27 14:34:09.682: INFO: Pod "downwardapi-volume-afe53454-e608-4138-9d27-c652efe92411" satisfied condition "Succeeded or Failed" +Oct 27 14:34:09.687: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-afe53454-e608-4138-9d27-c652efe92411 container client-container: +STEP: delete the pod +Oct 27 14:34:09.750: INFO: Waiting for pod downwardapi-volume-afe53454-e608-4138-9d27-c652efe92411 to disappear +Oct 27 14:34:09.754: INFO: Pod downwardapi-volume-afe53454-e608-4138-9d27-c652efe92411 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:09.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7460" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":120,"skipped":2204,"failed":0} +SSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:09.769: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-4908 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +STEP: reading a file in the container +Oct 27 14:34:12.456: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-4908 pod-service-account-41ad5002-1b01-4fa6-919a-57cbdfcce233 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Oct 27 14:34:12.810: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-4908 pod-service-account-41ad5002-1b01-4fa6-919a-57cbdfcce233 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Oct 27 14:34:13.074: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-4908 pod-service-account-41ad5002-1b01-4fa6-919a-57cbdfcce233 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:13.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4908" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":121,"skipped":2208,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:13.355: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2417 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2417.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 144.0.27.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.27.0.144_udp@PTR;check="$$(dig +tcp +noall +answer +search 144.0.27.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.27.0.144_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2417.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 144.0.27.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.27.0.144_udp@PTR;check="$$(dig +tcp +noall +answer +search 144.0.27.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.27.0.144_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:34:15.609: INFO: Unable to read wheezy_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.619: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.721: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.778: INFO: Unable to read jessie_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.786: INFO: Unable to read jessie_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.794: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.802: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:15.847: INFO: Lookups using dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909 failed for: [wheezy_udp@dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_udp@dns-test-service.dns-2417.svc.cluster.local jessie_tcp@dns-test-service.dns-2417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local] + +Oct 27 14:34:20.857: INFO: Unable to read wheezy_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:20.909: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods 
dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:20.917: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:20.924: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:20.977: INFO: Unable to read jessie_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:20.984: INFO: Unable to read jessie_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:20.994: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:21.002: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:21.159: INFO: Lookups using dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909 failed for: [wheezy_udp@dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_udp@dns-test-service.dns-2417.svc.cluster.local jessie_tcp@dns-test-service.dns-2417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local] + +Oct 27 14:34:25.859: INFO: Unable to read wheezy_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:25.902: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:25.911: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:25.920: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:25.974: INFO: Unable to read jessie_udp@dns-test-service.dns-2417.svc.cluster.local from pod 
dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:25.982: INFO: Unable to read jessie_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:25.989: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:25.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:26.041: INFO: Lookups using dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909 failed for: [wheezy_udp@dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_udp@dns-test-service.dns-2417.svc.cluster.local jessie_tcp@dns-test-service.dns-2417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local] + +Oct 27 14:34:30.856: INFO: Unable to read wheezy_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:30.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:30.910: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:30.917: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:30.973: INFO: Unable to read jessie_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:30.980: INFO: Unable to read jessie_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:30.988: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:30.996: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:31.041: INFO: Lookups using dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909 failed for: [wheezy_udp@dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_udp@dns-test-service.dns-2417.svc.cluster.local jessie_tcp@dns-test-service.dns-2417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local] + +Oct 27 14:34:35.856: INFO: Unable to read wheezy_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:35.870: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:35.879: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:35.922: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:35.977: INFO: Unable to read jessie_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:35.985: INFO: Unable to read jessie_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:35.993: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:36.001: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:36.046: INFO: Lookups using dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909 failed for: [wheezy_udp@dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_udp@dns-test-service.dns-2417.svc.cluster.local jessie_tcp@dns-test-service.dns-2417.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local] + +Oct 27 14:34:40.857: INFO: Unable to read wheezy_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:40.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:40.874: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:40.918: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:40.974: INFO: Unable to read jessie_udp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:40.981: INFO: Unable to read jessie_tcp@dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:40.989: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:40.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local from pod dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909: the server could not find the requested resource (get pods dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909) +Oct 27 14:34:41.086: INFO: Lookups using dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909 failed for: [wheezy_udp@dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@dns-test-service.dns-2417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_udp@dns-test-service.dns-2417.svc.cluster.local jessie_tcp@dns-test-service.dns-2417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2417.svc.cluster.local] + +Oct 27 14:34:46.039: INFO: DNS probes using dns-2417/dns-test-186e80cd-da4e-46f1-9d1a-a25c71c01909 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:46.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2417" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":122,"skipped":2227,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:46.083: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7187 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting the proxy server +Oct 27 14:34:46.245: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7187 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:46.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7187" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":123,"skipped":2232,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:46.329: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8974 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:34:46.479: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8974 create -f -' +Oct 27 14:34:46.694: INFO: stderr: "" +Oct 27 14:34:46.694: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Oct 27 14:34:46.694: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8974 create -f -' +Oct 27 14:34:46.886: INFO: stderr: "" +Oct 27 14:34:46.886: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 14:34:47.892: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:34:47.892: INFO: Found 1 / 1 +Oct 27 14:34:47.892: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 27 14:34:47.897: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:34:47.897: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Oct 27 14:34:47.897: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8974 describe pod agnhost-primary-4rckw' +Oct 27 14:34:47.986: INFO: stderr: "" +Oct 27 14:34:47.986: INFO: stdout: "Name: agnhost-primary-4rckw\nNamespace: kubectl-8974\nPriority: 0\nNode: izgw89f23rpcwrl79tpgp1z/10.250.8.35\nStart Time: Wed, 27 Oct 2021 14:34:46 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: 3ae15dd4c2f7c199afed9885680e3010f6662c91b8142a68935c53c3b9fbe99d\n cni.projectcalico.org/podIP: 172.16.1.137/32\n cni.projectcalico.org/podIPs: 172.16.1.137/32\n kubernetes.io/psp: e2e-test-privileged-psp\nStatus: Running\nIP: 172.16.1.137\nIPs:\n IP: 172.16.1.137\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://265858736b67dca7f2df715baa59a080917d0a7f1a76dfec7ea57b1cf0a4917b\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 27 Oct 2021 14:34:47 +0000\n Ready: True\n Restart Count: 0\n Environment:\n KUBERNETES_SERVICE_HOST: api.tmanu-jzf.it.internal.staging.k8s.ondemand.com\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pjhps (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-pjhps:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-8974/agnhost-primary-4rckw to izgw89f23rpcwrl79tpgp1z\n Normal Pulled 0s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 0s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" +Oct 27 14:34:47.986: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8974 describe rc agnhost-primary' +Oct 27 14:34:48.088: INFO: stderr: "" +Oct 27 14:34:48.088: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8974\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-4rckw\n" +Oct 27 14:34:48.088: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8974 describe service agnhost-primary' +Oct 27 14:34:48.180: INFO: stderr: "" +Oct 27 14:34:48.180: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8974\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 172.26.248.240\nIPs: 172.26.248.240\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.16.1.137:6379\nSession Affinity: None\nEvents: \n" +Oct 27 14:34:48.187: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8974 describe node izgw81stpxs0bun38i01tfz' +Oct 27 14:34:48.317: INFO: stderr: "" +Oct 27 14:34:48.317: INFO: stdout: "Name: izgw81stpxs0bun38i01tfz\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=ecs.t5-c1m2.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=eu-central-1\n failure-domain.beta.kubernetes.io/zone=eu-central-1b\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=izgw81stpxs0bun38i01tfz\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=ecs.t5-c1m2.large\n node.kubernetes.io/role=node\n topology.diskplugin.csi.alibabacloud.com/zone=eu-central-1b\n topology.kubernetes.io/region=eu-central-1\n topology.kubernetes.io/zone=eu-central-1b\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/cri-name=containerd\n worker.gardener.cloud/pool=worker-1\n worker.gardener.cloud/system-components=true\nAnnotations: csi.volume.kubernetes.io/nodeid: {\"diskplugin.csi.alibabacloud.com\":\"i-gw81stpxs0bun38i01tf\"}\n node.alpha.kubernetes.io/ttl: 0\n node.machine.sapcloud.io/last-applied-anno-labels-taints:\n {\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"node.kubernetes.io/role\":\"node\",\"worker.garden.sapcloud.io/group\":\"worker-1\",\"worker.gard...\n projectcalico.org/IPv4Address: 10.250.8.34/19\n projectcalico.org/IPv4IPIPTunnelAddr: 172.16.0.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 27 Oct 2021 13:52:33 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: izgw81stpxs0bun38i01tfz\n AcquireTime: \n RenewTime: Wed, 27 Oct 2021 14:34:47 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n ReadonlyFilesystem False Wed, 27 Oct 2021 14:29:48 +0000 Wed, 27 Oct 2021 14:19:45 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n CorruptDockerOverlay2 False Wed, 27 Oct 2021 14:29:48 +0000 Wed, 27 Oct 2021 14:19:45 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n FrequentUnregisterNetDevice False Wed, 27 Oct 2021 14:29:48 +0000 Wed, 27 Oct 2021 14:19:46 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Wed, 27 Oct 2021 14:29:48 +0000 Wed, 27 Oct 2021 14:19:45 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Wed, 27 Oct 2021 14:29:48 +0000 Wed, 27 Oct 2021 14:19:45 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Wed, 27 Oct 2021 14:29:48 +0000 Wed, 27 Oct 2021 14:19:45 +0000 NoFrequentContainerdRestart containerd is functioning properly\n KernelDeadlock False Wed, 27 Oct 2021 14:29:48 +0000 Wed, 27 Oct 2021 14:19:45 +0000 
KernelHasNoDeadlock kernel has no deadlock\n NetworkUnavailable False Wed, 27 Oct 2021 13:55:56 +0000 Wed, 27 Oct 2021 13:55:56 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Wed, 27 Oct 2021 14:34:45 +0000 Wed, 27 Oct 2021 13:52:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 27 Oct 2021 14:34:45 +0000 Wed, 27 Oct 2021 13:52:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 27 Oct 2021 14:34:45 +0000 Wed, 27 Oct 2021 13:52:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 27 Oct 2021 14:34:45 +0000 Wed, 27 Oct 2021 13:52:53 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.250.8.34\n Hostname: izgw81stpxs0bun38i01tfz\nCapacity:\n cpu: 2\n ephemeral-storage: 35067020Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 4035676Ki\n pods: 110\nAllocatable:\n cpu: 1920m\n ephemeral-storage: 34113197030\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 2884700Ki\n pods: 110\nSystem Info:\n Machine ID: 39c7a985ffaa4f3a8c546209c6a8ce18\n System UUID: 39c7a985-ffaa-4f3a-8c54-6209c6a8ce18\n Boot ID: 2411dd88-c0e0-46a8-8529-a417d684aa0a\n Kernel Version: 5.4.0-7-cloud-amd64\n OS Image: Garden Linux 318.9\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.4\n Kubelet Version: v1.22.2\n Kube-Proxy Version: v1.22.2\nPodCIDR: 172.16.0.0/24\nPodCIDRs: 172.16.0.0/24\nProviderID: eu-central-1.i-gw81stpxs0bun38i01tf\nNon-terminated Pods: (19 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system addons-nginx-ingress-controller-59fb958d58-lftrg 100m (5%) 400m (20%) 128Mi (4%) 512Mi (18%) 22m\n kube-system addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-kbm9x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m\n kube-system apiserver-proxy-k22hx 40m (2%) 400m (20%) 40Mi (1%) 500Mi (17%) 42m\n kube-system calico-kube-controllers-56bcbfb5c5-dr6cw 10m (0%) 50m (2%) 50Mi (1%) 100Mi (3%) 43m\n kube-system calico-node-bn6rh 250m (13%) 800m (41%) 100Mi (3%) 700Mi (24%) 38m\n kube-system calico-node-vertical-autoscaler-785b5f968-t4xfd 10m (0%) 10m (0%) 50Mi (1%) 50Mi (1%) 43m\n kube-system calico-typha-deploy-546b97d4b5-4h5cp 200m (10%) 500m (26%) 100Mi (3%) 700Mi (24%) 43m\n kube-system calico-typha-horizontal-autoscaler-5b58bb446c-sfqph 10m (0%) 10m (0%) 50Mi (1%) 50Mi (1%) 43m\n kube-system calico-typha-vertical-autoscaler-5c9655cddd-9fp9m 10m (0%) 10m (0%) 50Mi (1%) 50Mi (1%) 43m\n kube-system coredns-74d494ccd9-b4xr9 50m (2%) 250m (13%) 15Mi (0%) 500Mi (17%) 22m\n kube-system coredns-74d494ccd9-tk5m9 50m (2%) 250m (13%) 15Mi (0%) 500Mi (17%) 43m\n kube-system csi-disk-plugin-alicloud-zkfgk 40m (2%) 110m (5%) 114Mi (4%) 180Mi (6%) 42m\n kube-system kube-proxy-x6l7r 34m (1%) 92m (4%) 47753748 (1%) 145014992 (4%) 39m\n kube-system metrics-server-5d4664d665-hnljs 50m (2%) 500m (26%) 150Mi (5%) 1Gi (36%) 22m\n kube-system node-exporter-dh57q 50m (2%) 150m (7%) 50Mi (1%) 150Mi (5%) 42m\n kube-system node-problem-detector-wm6mk 11m (0%) 44m (2%) 23574998 (0%) 94299992 (3%) 15m\n kube-system vpn-shoot-78f675c9df-gzflt 11m (0%) 44m (2%) 11500k (0%) 46M (1%) 22m\n kubernetes-dashboard dashboard-metrics-scraper-7ccbfc448f-4l6g7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m\n kubernetes-dashboard kubernetes-dashboard-6cc9c75584-c47x8 50m (2%) 200m (10%) 50Mi (1%) 200Mi (7%) 43m\nAllocated 
resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 976m (50%) 3820m (198%)\n memory 1091558858 (36%) 5754687400 (194%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 42m kubelet Starting kubelet.\n Warning InvalidDiskCapacity 42m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 42m (x2 over 42m) kubelet Node izgw81stpxs0bun38i01tfz status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 42m (x2 over 42m) kubelet Node izgw81stpxs0bun38i01tfz status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 42m (x2 over 42m) kubelet Node izgw81stpxs0bun38i01tfz status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 42m kubelet Updated Node Allocatable limit across pods\n Normal NodeReady 41m kubelet Node izgw81stpxs0bun38i01tfz status is now: NodeReady\n Warning ContainerdStart 41m (x2 over 41m) systemd-monitor Starting containerd container runtime...\n Warning DockerStart 41m (x2 over 41m) systemd-monitor Starting Docker Application Container Engine...\n Warning ContainerdStart 15m systemd-monitor Starting containerd container runtime...\n Warning DockerStart 15m systemd-monitor Starting Docker Application Container Engine...\n" +Oct 27 14:34:48.317: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8974 describe namespace kubectl-8974' +Oct 27 14:34:48.407: INFO: stderr: "" +Oct 27 14:34:48.407: INFO: stdout: "Name: kubectl-8974\nLabels: e2e-framework=kubectl\n e2e-run=33663709-29f8-4e40-9066-22fcaa6d2004\n kubernetes.io/metadata.name=kubectl-8974\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:48.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8974" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":124,"skipped":2251,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:48.420: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1697 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap that has name configmap-test-emptyKey-ef810c2b-8e42-4d1d-9523-9b8fd3d6491c +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:48.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1697" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":125,"skipped":2261,"failed":0} +SS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:48.651: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-9401 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. 
+Oct 27 14:34:48.814: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 14:34:53.819: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Oct 27 14:34:53.824: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Oct 27 14:34:53.834: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Oct 27 14:34:53.838: INFO: Observed &ReplicaSet event: ADDED +Oct 27 14:34:53.838: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:34:53.838: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:34:53.838: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:34:53.838: INFO: Found replicaset test-rs in namespace replicaset-9401 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:34:53.838: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Oct 27 14:34:53.839: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:34:53.868: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Oct 27 14:34:53.871: INFO: Observed &ReplicaSet event: ADDED +Oct 27 14:34:53.872: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:34:53.872: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:34:53.872: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:34:53.872: INFO: Observed replicaset test-rs in namespace replicaset-9401 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:34:53.872: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:34:53.872: INFO: Found replicaset test-rs in namespace replicaset-9401 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Oct 27 14:34:53.872: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:53.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-9401" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":126,"skipped":2263,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:53.886: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3370 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-00b3709d-782f-43b2-9201-76a75245a346 +STEP: Creating configMap with name cm-test-opt-upd-2a800150-5722-4862-85e8-8368421cc201 +STEP: Creating the pod +Oct 27 14:34:54.064: INFO: The status of Pod pod-projected-configmaps-6aa0367e-7fed-4029-be26-12c4e6e3a67b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:34:56.070: INFO: The status of Pod pod-projected-configmaps-6aa0367e-7fed-4029-be26-12c4e6e3a67b is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-00b3709d-782f-43b2-9201-76a75245a346 +STEP: Updating configmap cm-test-opt-upd-2a800150-5722-4862-85e8-8368421cc201 +STEP: Creating configMap with name cm-test-opt-create-2470f6d5-06aa-43aa-ab47-bb8a6f7ca3a3 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:58.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3370" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":127,"skipped":2276,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:58.297: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7622 +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:34:58.451: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:01.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7622" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":128,"skipped":2278,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:01.673: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename prestop +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-2699 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating server pod server in namespace prestop-2699 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-2699 +STEP: Deleting pre-stop pod +Oct 27 14:35:10.961: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:10.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-2699" for this suite. +•{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":129,"skipped":2295,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:10.986: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9834 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:35:13.170: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:13.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9834" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":130,"skipped":2348,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:13.195: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1303 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-c2337765-aad6-40c8-bd0d-4d7fcaebce7a +STEP: Creating a pod to test consume secrets +Oct 27 14:35:13.363: INFO: Waiting up to 5m0s for pod "pod-secrets-203451d8-b1f7-4664-a34c-ea2ca790c373" in namespace "secrets-1303" to be "Succeeded or Failed" +Oct 27 14:35:13.368: INFO: Pod "pod-secrets-203451d8-b1f7-4664-a34c-ea2ca790c373": Phase="Pending", Reason="", readiness=false. Elapsed: 4.990581ms +Oct 27 14:35:15.374: INFO: Pod "pod-secrets-203451d8-b1f7-4664-a34c-ea2ca790c373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01141255s +STEP: Saw pod success +Oct 27 14:35:15.375: INFO: Pod "pod-secrets-203451d8-b1f7-4664-a34c-ea2ca790c373" satisfied condition "Succeeded or Failed" +Oct 27 14:35:15.385: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-203451d8-b1f7-4664-a34c-ea2ca790c373 container secret-env-test: +STEP: delete the pod +Oct 27 14:35:15.404: INFO: Waiting for pod pod-secrets-203451d8-b1f7-4664-a34c-ea2ca790c373 to disappear +Oct 27 14:35:15.408: INFO: Pod pod-secrets-203451d8-b1f7-4664-a34c-ea2ca790c373 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:15.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1303" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":131,"skipped":2369,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:15.422: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-5343 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:35:15.586: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-86cb62b3-7633-432c-a29b-886cb0c115ca" in namespace "security-context-test-5343" to be "Succeeded or Failed" +Oct 27 14:35:15.590: INFO: Pod "busybox-readonly-false-86cb62b3-7633-432c-a29b-886cb0c115ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321581ms +Oct 27 14:35:17.596: INFO: Pod "busybox-readonly-false-86cb62b3-7633-432c-a29b-886cb0c115ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010166683s +Oct 27 14:35:17.596: INFO: Pod "busybox-readonly-false-86cb62b3-7633-432c-a29b-886cb0c115ca" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:17.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-5343" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":132,"skipped":2399,"failed":0} + +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:17.610: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-3289 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 14:35:17.762: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 14:35:17.772: INFO: Waiting for terminating namespaces to be deleted... +Oct 27 14:35:17.776: INFO: +Logging pods the apiserver thinks is on node izgw81stpxs0bun38i01tfz before test +Oct 27 14:35:17.790: INFO: addons-nginx-ingress-controller-59fb958d58-lftrg from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-kbm9x from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: apiserver-proxy-k22hx from kube-system started at 2021-10-27 13:52:34 +0000 UTC (2 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: calico-kube-controllers-56bcbfb5c5-dr6cw from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: calico-node-bn6rh from kube-system started at 2021-10-27 13:55:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: calico-node-vertical-autoscaler-785b5f968-t4xfd from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: calico-typha-deploy-546b97d4b5-4h5cp from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 14:35:17.790: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-sfqph from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 
14:35:17.790: INFO: calico-typha-vertical-autoscaler-5c9655cddd-9fp9m from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.790: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: coredns-74d494ccd9-b4xr9 from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: coredns-74d494ccd9-tk5m9 from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: csi-disk-plugin-alicloud-zkfgk from kube-system started at 2021-10-27 13:52:34 +0000 UTC (3 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: kube-proxy-x6l7r from kube-system started at 2021-10-27 13:55:43 +0000 UTC (2 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: metrics-server-5d4664d665-hnljs from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: node-exporter-dh57q from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: node-problem-detector-wm6mk from kube-system started at 2021-10-27 14:19:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: vpn-shoot-78f675c9df-gzflt from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: dashboard-metrics-scraper-7ccbfc448f-4l6g7 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 14:35:17.791: INFO: kubernetes-dashboard-6cc9c75584-c47x8 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.791: INFO: Container kubernetes-dashboard ready: true, restart count 3 +Oct 27 14:35:17.791: INFO: +Logging pods the apiserver thinks is on node izgw89f23rpcwrl79tpgp1z before test +Oct 27 14:35:17.800: INFO: apiserver-proxy-vbdr6 from kube-system started at 2021-10-27 13:52:48 +0000 UTC (2 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: blackbox-exporter-65c549b94c-tkdlz from kube-system started at 2021-10-27 13:59:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: calico-node-fxz56 from kube-system started at 2021-10-27 13:55:41 +0000 UTC (1 container statuses 
recorded) +Oct 27 14:35:17.800: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: csi-disk-plugin-alicloud-8kdpb from kube-system started at 2021-10-27 13:52:48 +0000 UTC (3 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: kube-proxy-2s7tx from kube-system started at 2021-10-27 13:55:44 +0000 UTC (2 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: node-exporter-zqsss from kube-system started at 2021-10-27 13:52:48 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: node-problem-detector-tddcd from kube-system started at 2021-10-27 14:19:43 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: tester from prestop-2699 started at 2021-10-27 14:35:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container tester ready: true, restart count 0 +Oct 27 14:35:17.800: INFO: busybox-readonly-false-86cb62b3-7633-432c-a29b-886cb0c115ca from security-context-test-5343 started at 2021-10-27 14:35:15 +0000 UTC (1 container statuses recorded) +Oct 27 14:35:17.800: INFO: Container busybox-readonly-false-86cb62b3-7633-432c-a29b-886cb0c115ca ready: false, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.16b1e9e6f2875b4a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:18.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3289" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":133,"skipped":2399,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:18.854: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2498 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-338b58cb-06d4-4ebb-9e52-481dcddc7751 +STEP: Creating a pod to test consume configMaps +Oct 27 14:35:19.024: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e400cc9-bad4-4556-b7ff-fdd5c2a1da72" in namespace "projected-2498" to be "Succeeded or Failed" +Oct 27 14:35:19.029: INFO: Pod "pod-projected-configmaps-0e400cc9-bad4-4556-b7ff-fdd5c2a1da72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.891548ms +Oct 27 14:35:21.035: INFO: Pod "pod-projected-configmaps-0e400cc9-bad4-4556-b7ff-fdd5c2a1da72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010798779s +STEP: Saw pod success +Oct 27 14:35:21.035: INFO: Pod "pod-projected-configmaps-0e400cc9-bad4-4556-b7ff-fdd5c2a1da72" satisfied condition "Succeeded or Failed" +Oct 27 14:35:21.040: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-configmaps-0e400cc9-bad4-4556-b7ff-fdd5c2a1da72 container agnhost-container: +STEP: delete the pod +Oct 27 14:35:21.059: INFO: Waiting for pod pod-projected-configmaps-0e400cc9-bad4-4556-b7ff-fdd5c2a1da72 to disappear +Oct 27 14:35:21.063: INFO: Pod pod-projected-configmaps-0e400cc9-bad4-4556-b7ff-fdd5c2a1da72 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:21.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2498" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":134,"skipped":2406,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:21.076: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1442 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:21.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1442" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":135,"skipped":2417,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:21.262: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2557 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:35:21.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74d2c7d0-306c-45f2-bb57-77059e04bde1" in namespace "downward-api-2557" to be "Succeeded or Failed" +Oct 27 14:35:21.457: INFO: Pod "downwardapi-volume-74d2c7d0-306c-45f2-bb57-77059e04bde1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243585ms +Oct 27 14:35:23.463: INFO: Pod "downwardapi-volume-74d2c7d0-306c-45f2-bb57-77059e04bde1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010071091s +STEP: Saw pod success +Oct 27 14:35:23.463: INFO: Pod "downwardapi-volume-74d2c7d0-306c-45f2-bb57-77059e04bde1" satisfied condition "Succeeded or Failed" +Oct 27 14:35:23.468: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-74d2c7d0-306c-45f2-bb57-77059e04bde1 container client-container: +STEP: delete the pod +Oct 27 14:35:23.491: INFO: Waiting for pod downwardapi-volume-74d2c7d0-306c-45f2-bb57-77059e04bde1 to disappear +Oct 27 14:35:23.496: INFO: Pod downwardapi-volume-74d2c7d0-306c-45f2-bb57-77059e04bde1 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:23.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2557" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":136,"skipped":2424,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:23.509: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5324 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 14:35:23.671: INFO: Waiting up to 5m0s for pod "pod-6099b7d5-3866-4b49-8641-154786860d62" in namespace "emptydir-5324" to be "Succeeded or Failed" +Oct 27 14:35:23.676: INFO: Pod "pod-6099b7d5-3866-4b49-8641-154786860d62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.543792ms +Oct 27 14:35:25.683: INFO: Pod "pod-6099b7d5-3866-4b49-8641-154786860d62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011220396s +STEP: Saw pod success +Oct 27 14:35:25.683: INFO: Pod "pod-6099b7d5-3866-4b49-8641-154786860d62" satisfied condition "Succeeded or Failed" +Oct 27 14:35:25.687: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-6099b7d5-3866-4b49-8641-154786860d62 container test-container: +STEP: delete the pod +Oct 27 14:35:25.708: INFO: Waiting for pod pod-6099b7d5-3866-4b49-8641-154786860d62 to disappear +Oct 27 14:35:25.712: INFO: Pod pod-6099b7d5-3866-4b49-8641-154786860d62 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:25.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5324" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":137,"skipped":2426,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:25.725: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1135 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating all guestbook components +Oct 27 14:35:25.876: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Oct 27 14:35:25.876: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 create -f -' +Oct 27 14:35:26.103: INFO: stderr: "" +Oct 27 14:35:26.103: INFO: stdout: "service/agnhost-replica created\n" +Oct 27 14:35:26.104: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Oct 27 14:35:26.104: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 create -f -' +Oct 27 14:35:26.281: INFO: stderr: "" +Oct 27 14:35:26.281: INFO: stdout: "service/agnhost-primary created\n" +Oct 27 14:35:26.281: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Oct 27 14:35:26.281: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 create -f -' +Oct 27 14:35:26.478: INFO: stderr: "" +Oct 27 14:35:26.478: INFO: stdout: "service/frontend created\n" +Oct 27 14:35:26.478: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Oct 27 14:35:26.478: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 create -f -' +Oct 27 14:35:26.663: INFO: stderr: "" +Oct 27 14:35:26.663: INFO: stdout: "deployment.apps/frontend created\n" +Oct 27 14:35:26.663: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 14:35:26.663: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 create -f -' +Oct 27 14:35:26.868: INFO: stderr: "" +Oct 27 14:35:26.868: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Oct 27 14:35:26.868: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 14:35:26.868: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 create -f -' +Oct 27 14:35:27.060: INFO: stderr: "" +Oct 27 14:35:27.060: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Oct 27 14:35:27.060: INFO: Waiting for all frontend pods to be Running. +Oct 27 14:35:32.113: INFO: Waiting for frontend to serve content. +Oct 27 14:35:32.173: INFO: Trying to add a new entry to the guestbook. +Oct 27 14:35:32.187: INFO: Verifying that added entry can be retrieved. 
+STEP: using delete to clean up resources +Oct 27 14:35:32.248: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 delete --grace-period=0 --force -f -' +Oct 27 14:35:32.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:35:32.335: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:35:32.335: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 delete --grace-period=0 --force -f -' +Oct 27 14:35:32.422: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:35:32.422: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:35:32.422: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 delete --grace-period=0 --force -f -' +Oct 27 14:35:32.499: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:35:32.499: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:35:32.499: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 delete --grace-period=0 --force -f -' +Oct 27 14:35:32.578: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:35:32.578: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:35:32.578: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 delete --grace-period=0 --force -f -' +Oct 27 14:35:32.655: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:35:32.655: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:35:32.656: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1135 delete --grace-period=0 --force -f -' +Oct 27 14:35:32.724: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:35:32.724: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:32.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1135" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":138,"skipped":2444,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:32.737: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-3716 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:35:33.554: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:35:36.580: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:35:36.585: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:39.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-3716" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":139,"skipped":2447,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:39.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2222 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:35:40.479: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:35:43.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:43.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2222" for this suite. +STEP: Destroying namespace "webhook-2222-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":140,"skipped":2504,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:44.165: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9808 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:35:44.480: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9808 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 14:35:44.559: INFO: stderr: "" +Oct 27 14:35:44.559: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Oct 27 14:35:44.560: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9808 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' +Oct 27 14:35:44.734: INFO: stderr: "" +Oct 27 14:35:44.734: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:35:44.739: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9808 delete pods e2e-test-httpd-pod' +Oct 27 14:35:46.848: INFO: stderr: "" +Oct 27 14:35:46.848: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:46.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9808" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":141,"skipped":2534,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:46.861: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-58 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-3728 +STEP: Creating secret with name secret-test-7b91061c-2201-4dfa-a388-5a3727ff0445 +STEP: Creating a pod to test consume secrets +Oct 27 14:35:47.173: INFO: Waiting up to 5m0s for pod "pod-secrets-4ddfd9e7-b91b-4822-8441-a5017ff93b35" in namespace "secrets-58" to be "Succeeded or Failed" +Oct 27 14:35:47.178: INFO: Pod "pod-secrets-4ddfd9e7-b91b-4822-8441-a5017ff93b35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494793ms +Oct 27 14:35:49.183: INFO: Pod "pod-secrets-4ddfd9e7-b91b-4822-8441-a5017ff93b35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010414643s +STEP: Saw pod success +Oct 27 14:35:49.184: INFO: Pod "pod-secrets-4ddfd9e7-b91b-4822-8441-a5017ff93b35" satisfied condition "Succeeded or Failed" +Oct 27 14:35:49.188: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-4ddfd9e7-b91b-4822-8441-a5017ff93b35 container secret-volume-test: +STEP: delete the pod +Oct 27 14:35:49.206: INFO: Waiting for pod pod-secrets-4ddfd9e7-b91b-4822-8441-a5017ff93b35 to disappear +Oct 27 14:35:49.209: INFO: Pod pod-secrets-4ddfd9e7-b91b-4822-8441-a5017ff93b35 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:49.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-58" for this suite. +STEP: Destroying namespace "secret-namespace-3728" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":142,"skipped":2550,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:49.266: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5049 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on node default medium +Oct 27 14:35:49.425: INFO: Waiting up to 5m0s for pod "pod-3bfcbcb8-fa7e-4d45-9ed6-29d47126c38e" in namespace "emptydir-5049" to be "Succeeded or Failed" +Oct 27 14:35:49.429: INFO: Pod "pod-3bfcbcb8-fa7e-4d45-9ed6-29d47126c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287207ms +Oct 27 14:35:51.435: INFO: Pod "pod-3bfcbcb8-fa7e-4d45-9ed6-29d47126c38e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010286758s +STEP: Saw pod success +Oct 27 14:35:51.436: INFO: Pod "pod-3bfcbcb8-fa7e-4d45-9ed6-29d47126c38e" satisfied condition "Succeeded or Failed" +Oct 27 14:35:51.440: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-3bfcbcb8-fa7e-4d45-9ed6-29d47126c38e container test-container: +STEP: delete the pod +Oct 27 14:35:51.459: INFO: Waiting for pod pod-3bfcbcb8-fa7e-4d45-9ed6-29d47126c38e to disappear +Oct 27 14:35:51.463: INFO: Pod pod-3bfcbcb8-fa7e-4d45-9ed6-29d47126c38e no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:51.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5049" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":143,"skipped":2556,"failed":0} +S +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:51.476: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename server-version +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in server-version-8780 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Request ServerVersion +STEP: Confirm major version +Oct 27 14:35:51.629: INFO: Major version: 1 +STEP: Confirm minor version +Oct 27 14:35:51.629: INFO: cleanMinorVersion: 22 +Oct 27 14:35:51.629: INFO: Minor version: 22 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:51.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-8780" for this suite. +•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":144,"skipped":2557,"failed":0} +SSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:51.642: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3012 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name secret-emptykey-test-38d97469-916b-4dc8-ae3f-b4a4b3723aa8 +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:51.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3012" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":145,"skipped":2561,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:51.804: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-3837 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Oct 27 14:35:53.979: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3837 PodName:var-expansion-c4f5011f-8851-4d2e-bf25-0ab4a848f3a5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:35:53.979: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: test for file in mounted path +Oct 27 14:35:54.243: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3837 PodName:var-expansion-c4f5011f-8851-4d2e-bf25-0ab4a848f3a5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:35:54.243: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: updating the annotation value +Oct 27 14:35:54.973: INFO: Successfully updated pod "var-expansion-c4f5011f-8851-4d2e-bf25-0ab4a848f3a5" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Oct 27 14:35:54.978: INFO: Deleting pod "var-expansion-c4f5011f-8851-4d2e-bf25-0ab4a848f3a5" in namespace "var-expansion-3837" +Oct 27 14:35:54.983: INFO: Wait up to 5m0s for pod "var-expansion-c4f5011f-8851-4d2e-bf25-0ab4a848f3a5" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:28.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3837" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":146,"skipped":2574,"failed":0} + +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:29.008: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2848 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-3912d394-06b0-40e9-9d8c-a1e56584abe7 +STEP: Creating a pod to test consume configMaps +Oct 27 14:36:29.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7d2fae2-6dc4-4a63-ae9d-75d3a3a615d7" in namespace "configmap-2848" to be "Succeeded or Failed" +Oct 27 14:36:29.191: INFO: Pod "pod-configmaps-f7d2fae2-6dc4-4a63-ae9d-75d3a3a615d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.945552ms +Oct 27 14:36:31.197: INFO: Pod "pod-configmaps-f7d2fae2-6dc4-4a63-ae9d-75d3a3a615d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010965685s +STEP: Saw pod success +Oct 27 14:36:31.197: INFO: Pod "pod-configmaps-f7d2fae2-6dc4-4a63-ae9d-75d3a3a615d7" satisfied condition "Succeeded or Failed" +Oct 27 14:36:31.202: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-f7d2fae2-6dc4-4a63-ae9d-75d3a3a615d7 container agnhost-container: +STEP: delete the pod +Oct 27 14:36:31.220: INFO: Waiting for pod pod-configmaps-f7d2fae2-6dc4-4a63-ae9d-75d3a3a615d7 to disappear +Oct 27 14:36:31.224: INFO: Pod pod-configmaps-f7d2fae2-6dc4-4a63-ae9d-75d3a3a615d7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:31.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2848" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":147,"skipped":2574,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:31.238: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5700 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Starting the proxy +Oct 27 14:36:31.386: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5700 proxy --unix-socket=/tmp/kubectl-proxy-unix092729395/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:31.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5700" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":148,"skipped":2577,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:31.438: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6787 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 14:36:31.598: INFO: Waiting up to 5m0s for pod "pod-aa198490-9e62-4483-b9a6-15eb4bb4885d" in namespace "emptydir-6787" to be "Succeeded or Failed" +Oct 27 14:36:31.603: INFO: Pod "pod-aa198490-9e62-4483-b9a6-15eb4bb4885d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.895277ms +Oct 27 14:36:33.610: INFO: Pod "pod-aa198490-9e62-4483-b9a6-15eb4bb4885d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011616016s +STEP: Saw pod success +Oct 27 14:36:33.610: INFO: Pod "pod-aa198490-9e62-4483-b9a6-15eb4bb4885d" satisfied condition "Succeeded or Failed" +Oct 27 14:36:33.615: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-aa198490-9e62-4483-b9a6-15eb4bb4885d container test-container: +STEP: delete the pod +Oct 27 14:36:33.635: INFO: Waiting for pod pod-aa198490-9e62-4483-b9a6-15eb4bb4885d to disappear +Oct 27 14:36:33.639: INFO: Pod pod-aa198490-9e62-4483-b9a6-15eb4bb4885d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:33.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6787" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":149,"skipped":2599,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:33.653: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5471 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:36:33.819: INFO: Waiting up to 5m0s for pod "downward-api-eec30232-8805-47da-8b3e-8adc01567830" in namespace "downward-api-5471" to be "Succeeded or Failed" +Oct 27 14:36:33.825: INFO: Pod "downward-api-eec30232-8805-47da-8b3e-8adc01567830": Phase="Pending", Reason="", readiness=false. Elapsed: 5.544947ms +Oct 27 14:36:35.831: INFO: Pod "downward-api-eec30232-8805-47da-8b3e-8adc01567830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012068163s +STEP: Saw pod success +Oct 27 14:36:35.831: INFO: Pod "downward-api-eec30232-8805-47da-8b3e-8adc01567830" satisfied condition "Succeeded or Failed" +Oct 27 14:36:35.836: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downward-api-eec30232-8805-47da-8b3e-8adc01567830 container dapi-container: +STEP: delete the pod +Oct 27 14:36:35.856: INFO: Waiting for pod downward-api-eec30232-8805-47da-8b3e-8adc01567830 to disappear +Oct 27 14:36:35.861: INFO: Pod downward-api-eec30232-8805-47da-8b3e-8adc01567830 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:35.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5471" for this suite. 
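+
+The Downward API test above injects the node's address through a fieldRef on status.hostIP rather than by querying the API from inside the container. An equivalent minimal pod (names invented):
+```bash
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-hostip-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox:1.29
+    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
+    env:
+    - name: HOST_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.hostIP
+EOF
+kubectl logs downward-hostip-demo   # prints the node IP once the pod has run
+```
+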
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":150,"skipped":2652,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:35.874: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7690 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:36:36.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb646dce-1894-4438-a117-1f95b5e0cabc" in namespace "projected-7690" to be "Succeeded or Failed" +Oct 27 14:36:36.041: INFO: Pod "downwardapi-volume-bb646dce-1894-4438-a117-1f95b5e0cabc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.683806ms +Oct 27 14:36:38.048: INFO: Pod "downwardapi-volume-bb646dce-1894-4438-a117-1f95b5e0cabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010977744s +STEP: Saw pod success +Oct 27 14:36:38.048: INFO: Pod "downwardapi-volume-bb646dce-1894-4438-a117-1f95b5e0cabc" satisfied condition "Succeeded or Failed" +Oct 27 14:36:38.052: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-bb646dce-1894-4438-a117-1f95b5e0cabc container client-container: +STEP: delete the pod +Oct 27 14:36:38.070: INFO: Waiting for pod downwardapi-volume-bb646dce-1894-4438-a117-1f95b5e0cabc to disappear +Oct 27 14:36:38.075: INFO: Pod downwardapi-volume-bb646dce-1894-4438-a117-1f95b5e0cabc no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:38.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7690" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":151,"skipped":2673,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:38.087: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-5765 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 14:36:48.337: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:36:48.337210 5703 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 27 14:36:48.337: INFO: Deleting pod "simpletest-rc-to-be-deleted-664v9" in namespace "gc-5765" +Oct 27 14:36:48.346: INFO: Deleting pod "simpletest-rc-to-be-deleted-9pnh5" in namespace "gc-5765" +Oct 27 14:36:48.355: INFO: Deleting pod "simpletest-rc-to-be-deleted-cvgdv" in namespace "gc-5765" +Oct 27 14:36:48.361: INFO: Deleting pod "simpletest-rc-to-be-deleted-gckcv" in namespace "gc-5765" +Oct 27 14:36:48.368: INFO: Deleting pod "simpletest-rc-to-be-deleted-h9vh9" in namespace "gc-5765" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:48.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5765" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":152,"skipped":2674,"failed":0} + +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:48.385: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8847 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8847.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8847.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:36:50.785: INFO: DNS probes using dns-8847/dns-test-8481f417-8a24-4ca7-8d91-2d1727a951fc succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:50.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8847" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":153,"skipped":2674,"failed":0} +S +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:50.807: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-442 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:37:07.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-442" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":346,"completed":154,"skipped":2675,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:37:07.080: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-236 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:37:07.229: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Oct 27 14:37:08.265: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:37:08.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-236" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":155,"skipped":2713,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:37:08.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5948 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on tmpfs +Oct 27 14:37:08.452: INFO: Waiting up to 5m0s for pod "pod-0fe74f01-f372-4faf-a954-821486736f86" in namespace "emptydir-5948" to be "Succeeded or Failed" +Oct 27 14:37:08.456: INFO: Pod "pod-0fe74f01-f372-4faf-a954-821486736f86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211536ms +Oct 27 14:37:10.461: INFO: Pod "pod-0fe74f01-f372-4faf-a954-821486736f86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009662145s +STEP: Saw pod success +Oct 27 14:37:10.462: INFO: Pod "pod-0fe74f01-f372-4faf-a954-821486736f86" satisfied condition "Succeeded or Failed" +Oct 27 14:37:10.466: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-0fe74f01-f372-4faf-a954-821486736f86 container test-container: +STEP: delete the pod +Oct 27 14:37:10.527: INFO: Waiting for pod pod-0fe74f01-f372-4faf-a954-821486736f86 to disappear +Oct 27 14:37:10.531: INFO: Pod pod-0fe74f01-f372-4faf-a954-821486736f86 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:37:10.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5948" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":156,"skipped":2761,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:37:10.545: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2084 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:37:10.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-daceba5e-5022-4daf-b246-6fd0f54f3112" in namespace "projected-2084" to be "Succeeded or Failed" +Oct 27 14:37:10.706: INFO: Pod "downwardapi-volume-daceba5e-5022-4daf-b246-6fd0f54f3112": Phase="Pending", Reason="", readiness=false. Elapsed: 4.558074ms +Oct 27 14:37:12.712: INFO: Pod "downwardapi-volume-daceba5e-5022-4daf-b246-6fd0f54f3112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010260193s +STEP: Saw pod success +Oct 27 14:37:12.712: INFO: Pod "downwardapi-volume-daceba5e-5022-4daf-b246-6fd0f54f3112" satisfied condition "Succeeded or Failed" +Oct 27 14:37:12.717: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-daceba5e-5022-4daf-b246-6fd0f54f3112 container client-container: +STEP: delete the pod +Oct 27 14:37:12.782: INFO: Waiting for pod downwardapi-volume-daceba5e-5022-4daf-b246-6fd0f54f3112 to disappear +Oct 27 14:37:12.786: INFO: Pod downwardapi-volume-daceba5e-5022-4daf-b246-6fd0f54f3112 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:37:12.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2084" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":157,"skipped":2776,"failed":0} +S +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:37:12.798: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-682 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:37:15.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-682" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":158,"skipped":2777,"failed":0} +SSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:37:15.034: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-9316 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:37:15.183: INFO: PodSpec: initContainers in spec.initContainers +Oct 27 14:38:01.360: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9fdd26dd-a7d6-4b3c-8352-33b7268f99aa", GenerateName:"", Namespace:"init-container-9316", SelfLink:"", UID:"9dc0342c-8ab7-4a12-89cb-ba36520283ae", ResourceVersion:"20162", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770942235, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"183077237"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"2deafc0325d32b9b968871262b8d88517dac5910ee7554ca1c9c2d9fecb45228", "cni.projectcalico.org/podIP":"172.16.1.174/32", "cni.projectcalico.org/podIPs":"172.16.1.174/32", "kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000110678), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000110978), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000110a08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000110a38), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000110bb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000110db0), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-2jbnm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002ef4080), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmanu-jzf.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2jbnm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmanu-jzf.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2jbnm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmanu-jzf.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, 
scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2jbnm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0054ba128), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"izgw89f23rpcwrl79tpgp1z", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b24000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0054ba1a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0054ba1c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0054ba1c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0054ba1cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0022fa030), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942235, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942235, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942235, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942235, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.250.8.35", PodIP:"172.16.1.174", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.16.1.174"}}, StartTime:(*v1.Time)(0xc000110f60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b240e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b24150)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://05b0d949f9ca7244060a76d3ed6009d557c4c76b5556bf7ae664bb0dedc8b349", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ef41a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ef4180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc0054ba24f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:01.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-9316" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":159,"skipped":2780,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:01.374: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-9972 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Oct 27 14:38:01.547: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:38:03.554: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Oct 27 14:38:03.575: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:38:05.581: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Oct 27 14:38:05.587: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:05.587: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:05.779: INFO: Exec stderr: "" +Oct 27 14:38:05.779: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:05.779: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:06.013: INFO: Exec stderr: "" +Oct 27 14:38:06.013: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:06.013: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:06.283: INFO: Exec stderr: "" +Oct 27 14:38:06.283: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:06.283: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:06.475: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Oct 27 14:38:06.475: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-pod 
ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:06.475: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:06.700: INFO: Exec stderr: "" +Oct 27 14:38:06.701: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:06.701: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:06.982: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Oct 27 14:38:06.982: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:06.982: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:07.217: INFO: Exec stderr: "" +Oct 27 14:38:07.217: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:07.217: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:07.454: INFO: Exec stderr: "" +Oct 27 14:38:07.454: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:07.454: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:07.712: INFO: Exec stderr: "" +Oct 27 14:38:07.712: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9972 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:38:07.712: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:38:07.895: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:07.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-9972" for this suite. 
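+
+The distinguishing mark that the test above inspects is the header kubelet writes into /etc/hosts of containers in hostNetwork=false pods; hostNetwork pods and containers that mount their own /etc/hosts keep the node's file instead. A quick manual check (invented pod name):
+```bash
+kubectl run etc-hosts-demo --image=busybox:1.29 --restart=Never -- sleep 600
+kubectl wait --for=condition=Ready pod/etc-hosts-demo
+kubectl exec etc-hosts-demo -- head -1 /etc/hosts
+# expect: "# Kubernetes-managed hosts file." for hostNetwork=false pods
+kubectl delete pod etc-hosts-demo
+```
+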
+•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":160,"skipped":2796,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:07.909: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8548 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:08.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8548" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":161,"skipped":2805,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:08.126: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7284 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 14:38:08.294: INFO: Waiting up to 5m0s for pod "pod-24a95876-368b-477d-b379-a184856c4fe4" in namespace "emptydir-7284" to be "Succeeded or Failed" +Oct 27 14:38:08.299: INFO: Pod "pod-24a95876-368b-477d-b379-a184856c4fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2502ms +Oct 27 14:38:10.304: INFO: Pod "pod-24a95876-368b-477d-b379-a184856c4fe4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009996256s +STEP: Saw pod success +Oct 27 14:38:10.305: INFO: Pod "pod-24a95876-368b-477d-b379-a184856c4fe4" satisfied condition "Succeeded or Failed" +Oct 27 14:38:10.309: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-24a95876-368b-477d-b379-a184856c4fe4 container test-container: +STEP: delete the pod +Oct 27 14:38:10.376: INFO: Waiting for pod pod-24a95876-368b-477d-b379-a184856c4fe4 to disappear +Oct 27 14:38:10.380: INFO: Pod pod-24a95876-368b-477d-b379-a184856c4fe4 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:10.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7284" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":162,"skipped":2806,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:10.394: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9348 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-664a78a1-0627-4a72-ae82-c69d8ecbc086 +STEP: Creating a pod to test consume configMaps +Oct 27 14:38:10.562: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5b89523-6033-406c-9dd3-02d83fc894dd" in namespace "projected-9348" to be "Succeeded or Failed" +Oct 27 14:38:10.567: INFO: Pod "pod-projected-configmaps-b5b89523-6033-406c-9dd3-02d83fc894dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541559ms +Oct 27 14:38:12.572: INFO: Pod "pod-projected-configmaps-b5b89523-6033-406c-9dd3-02d83fc894dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010256793s +STEP: Saw pod success +Oct 27 14:38:12.572: INFO: Pod "pod-projected-configmaps-b5b89523-6033-406c-9dd3-02d83fc894dd" satisfied condition "Succeeded or Failed" +Oct 27 14:38:12.584: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-configmaps-b5b89523-6033-406c-9dd3-02d83fc894dd container agnhost-container: +STEP: delete the pod +Oct 27 14:38:12.647: INFO: Waiting for pod pod-projected-configmaps-b5b89523-6033-406c-9dd3-02d83fc894dd to disappear +Oct 27 14:38:12.651: INFO: Pod pod-projected-configmaps-b5b89523-6033-406c-9dd3-02d83fc894dd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:12.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9348" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":163,"skipped":2829,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:12.664: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1318 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service multi-endpoint-test in namespace services-1318 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1318 to expose endpoints map[] +Oct 27 14:38:12.837: INFO: successfully validated that service multi-endpoint-test in namespace services-1318 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-1318 +Oct 27 14:38:12.852: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:38:14.858: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1318 to expose endpoints map[pod1:[100]] +Oct 27 14:38:14.879: INFO: successfully validated that service multi-endpoint-test in namespace services-1318 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-1318 +Oct 27 14:38:14.895: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:38:16.900: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1318 to expose endpoints map[pod1:[100] pod2:[101]] +Oct 27 14:38:16.925: INFO: successfully validated that service multi-endpoint-test in namespace services-1318 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: 
Checking if the Service forwards traffic to pods +Oct 27 14:38:16.925: INFO: Creating new exec pod +Oct 27 14:38:19.949: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1318 exec execpodxd62g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Oct 27 14:38:20.215: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:38:20.215: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:38:20.215: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1318 exec execpodxd62g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.24.167.220 80' +Oct 27 14:38:20.523: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.24.167.220 80\nConnection to 172.24.167.220 80 port [tcp/http] succeeded!\n" +Oct 27 14:38:20.523: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:38:20.523: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1318 exec execpodxd62g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Oct 27 14:38:20.834: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 81\n+ echo hostName\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Oct 27 14:38:20.834: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:38:20.835: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1318 exec execpodxd62g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.24.167.220 81' +Oct 27 14:38:21.117: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.24.167.220 81\nConnection to 172.24.167.220 81 port [tcp/*] succeeded!\n" +Oct 27 14:38:21.117: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-1318 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1318 to expose endpoints map[pod2:[101]] +Oct 27 14:38:22.172: INFO: successfully validated that service multi-endpoint-test in namespace services-1318 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-1318 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1318 to expose endpoints map[] +Oct 27 14:38:22.276: INFO: successfully validated that service multi-endpoint-test in namespace services-1318 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:22.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1318" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":164,"skipped":2834,"failed":0} +SSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:22.299: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename runtimeclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-344 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Oct 27 14:38:22.530: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Oct 27 14:38:22.560: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:22.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-344" for this suite. 
+•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":165,"skipped":2837,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:22.599: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1984 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +Oct 27 14:38:22.752: INFO: created test-event-1 +Oct 27 14:38:22.757: INFO: created test-event-2 +Oct 27 14:38:22.762: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Oct 27 14:38:22.766: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Oct 27 14:38:22.779: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:22.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-1984" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":166,"skipped":2872,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:22.795: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3313 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-3313/configmap-test-97891631-3c35-4b0a-a7c9-32696d62e681 +STEP: Creating a pod to test consume configMaps +Oct 27 14:38:22.955: INFO: Waiting up to 5m0s for pod "pod-configmaps-6eaa5540-981f-4c9f-bddd-92dcd7c0b50b" in namespace "configmap-3313" to be "Succeeded or Failed" +Oct 27 14:38:22.960: INFO: Pod "pod-configmaps-6eaa5540-981f-4c9f-bddd-92dcd7c0b50b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.909337ms +Oct 27 14:38:24.965: INFO: Pod "pod-configmaps-6eaa5540-981f-4c9f-bddd-92dcd7c0b50b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009837446s +STEP: Saw pod success +Oct 27 14:38:24.965: INFO: Pod "pod-configmaps-6eaa5540-981f-4c9f-bddd-92dcd7c0b50b" satisfied condition "Succeeded or Failed" +Oct 27 14:38:24.970: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-6eaa5540-981f-4c9f-bddd-92dcd7c0b50b container env-test: +STEP: delete the pod +Oct 27 14:38:24.988: INFO: Waiting for pod pod-configmaps-6eaa5540-981f-4c9f-bddd-92dcd7c0b50b to disappear +Oct 27 14:38:24.992: INFO: Pod pod-configmaps-6eaa5540-981f-4c9f-bddd-92dcd7c0b50b no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:24.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3313" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":167,"skipped":2910,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:25.006: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-9015 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 14:38:25.198: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:27.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9015" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":168,"skipped":2941,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:27.220: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename tables +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-4467 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:27.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-4467" for this suite. +•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":169,"skipped":2970,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:27.387: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2500 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-216abc61-7c57-4175-9045-b8d652aaf9f0 +STEP: Creating a pod to test consume secrets +Oct 27 14:38:27.548: INFO: Waiting up to 5m0s for pod "pod-secrets-4a266987-d7be-4e5c-ae74-6fe439a099d3" in namespace "secrets-2500" to be "Succeeded or Failed" +Oct 27 14:38:27.559: INFO: Pod "pod-secrets-4a266987-d7be-4e5c-ae74-6fe439a099d3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.007195ms +Oct 27 14:38:29.564: INFO: Pod "pod-secrets-4a266987-d7be-4e5c-ae74-6fe439a099d3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.016307816s +STEP: Saw pod success +Oct 27 14:38:29.564: INFO: Pod "pod-secrets-4a266987-d7be-4e5c-ae74-6fe439a099d3" satisfied condition "Succeeded or Failed" +Oct 27 14:38:29.569: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-4a266987-d7be-4e5c-ae74-6fe439a099d3 container secret-volume-test: +STEP: delete the pod +Oct 27 14:38:29.588: INFO: Waiting for pod pod-secrets-4a266987-d7be-4e5c-ae74-6fe439a099d3 to disappear +Oct 27 14:38:29.592: INFO: Pod pod-secrets-4a266987-d7be-4e5c-ae74-6fe439a099d3 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:29.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2500" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":170,"skipped":2977,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:29.606: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-9833 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Oct 27 14:38:29.785: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9833 7d900020-2435-40f6-8887-4fa162e2e941 20559 0 2021-10-27 14:38:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:38:29.785: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9833 7d900020-2435-40f6-8887-4fa162e2e941 20560 0 2021-10-27 14:38:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:29.785: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready +STEP: Destroying namespace "watch-9833" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":171,"skipped":3005,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:29.795: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2132 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:40.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2132" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":346,"completed":172,"skipped":3009,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:41.010: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2494 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:41.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2494" for this suite. +•{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":173,"skipped":3045,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:41.207: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7709 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-7709, will wait for the garbage collector to delete the pods +Oct 27 14:39:43.450: INFO: Deleting Job.batch foo took: 6.002174ms +Oct 27 14:39:43.550: INFO: Terminating Job.batch foo pods took: 100.784058ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:15.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-7709" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":174,"skipped":3113,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:15.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-641 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-641 +STEP: creating service affinity-nodeport in namespace services-641 +STEP: creating replication controller affinity-nodeport in namespace services-641 +I1027 14:40:16.136194 5703 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-641, replica count: 3 +I1027 14:40:19.187550 5703 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:40:19.204: INFO: Creating new exec pod +Oct 27 14:40:22.235: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-641 exec execpod-affinitytmvc4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Oct 27 14:40:22.528: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Oct 27 14:40:22.528: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:40:22.528: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-641 exec execpod-affinitytmvc4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.22.7 80' +Oct 27 14:40:22.961: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.22.7 80\nConnection to 172.25.22.7 80 port [tcp/http] succeeded!\n" +Oct 27 14:40:22.961: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:40:22.961: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-641 exec execpod-affinitytmvc4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.34 31591' +Oct 27 14:40:23.350: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.34 31591\nConnection to 10.250.8.34 31591 port [tcp/*] succeeded!\n" +Oct 
27 14:40:23.350: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:40:23.350: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-641 exec execpod-affinitytmvc4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 31591' +Oct 27 14:40:23.611: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.35 31591\nConnection to 10.250.8.35 31591 port [tcp/*] succeeded!\n" +Oct 27 14:40:23.611: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:40:23.611: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-641 exec execpod-affinitytmvc4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.8.34:31591/ ; done' +Oct 27 14:40:23.938: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31591/\n" +Oct 27 14:40:23.938: INFO: stdout: "\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr\naffinity-nodeport-zs8tr" +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: 
INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Received response from host: affinity-nodeport-zs8tr +Oct 27 14:40:23.938: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-641, will wait for the garbage collector to delete the pods +Oct 27 14:40:24.010: INFO: Deleting ReplicationController affinity-nodeport took: 6.203834ms +Oct 27 14:40:24.111: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.979877ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:26.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-641" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":175,"skipped":3123,"failed":0} +SSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:26.538: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9615 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:40:26.683: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:40:26.699: INFO: The status of Pod pod-logs-websocket-41bc3789-7715-4109-a7ef-58c5c8894b9e is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:40:28.705: INFO: The status of Pod pod-logs-websocket-41bc3789-7715-4109-a7ef-58c5c8894b9e is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:28.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9615" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":176,"skipped":3127,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:28.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-2608 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-6l6q +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:40:28.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6l6q" in namespace "subpath-2608" to be "Succeeded or Failed" +Oct 27 14:40:28.969: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Pending", Reason="", readiness=false. Elapsed: 5.25314ms +Oct 27 14:40:30.976: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 2.012622591s +Oct 27 14:40:32.982: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 4.018555345s +Oct 27 14:40:34.987: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 6.023743176s +Oct 27 14:40:36.994: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 8.030053956s +Oct 27 14:40:38.999: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 10.035859364s +Oct 27 14:40:41.006: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 12.042194459s +Oct 27 14:40:43.012: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 14.048082239s +Oct 27 14:40:45.018: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 16.054862921s +Oct 27 14:40:47.025: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 18.061713085s +Oct 27 14:40:49.032: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 20.068318154s +Oct 27 14:40:51.039: INFO: Pod "pod-subpath-test-configmap-6l6q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.075096369s +STEP: Saw pod success +Oct 27 14:40:51.039: INFO: Pod "pod-subpath-test-configmap-6l6q" satisfied condition "Succeeded or Failed" +Oct 27 14:40:51.045: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-subpath-test-configmap-6l6q container test-container-subpath-configmap-6l6q: +STEP: delete the pod +Oct 27 14:40:51.064: INFO: Waiting for pod pod-subpath-test-configmap-6l6q to disappear +Oct 27 14:40:51.068: INFO: Pod pod-subpath-test-configmap-6l6q no longer exists +STEP: Deleting pod pod-subpath-test-configmap-6l6q +Oct 27 14:40:51.068: INFO: Deleting pod "pod-subpath-test-configmap-6l6q" in namespace "subpath-2608" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:51.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-2608" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":177,"skipped":3168,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:51.085: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8987 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:02.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8987" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":346,"completed":178,"skipped":3198,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:02.315: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-1094 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:41:02.466: INFO: Creating deployment "test-recreate-deployment" +Oct 27 14:41:02.472: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Oct 27 14:41:02.480: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created +Oct 27 14:41:04.491: INFO: Waiting deployment "test-recreate-deployment" to complete +Oct 27 14:41:04.495: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Oct 27 14:41:04.506: INFO: Updating deployment test-recreate-deployment +Oct 27 14:41:04.506: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:41:04.567: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-1094 fc950fe0-4ef9-4f9a-b592-92d4fffdce98 21612 2 2021-10-27 14:41:02 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 14:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} 
status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003792f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 14:41:04 +0000 UTC,LastTransitionTime:2021-10-27 14:41:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-27 14:41:04 +0000 UTC,LastTransitionTime:2021-10-27 14:41:02 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Oct 27 14:41:04.572: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-1094 80e2c044-9f4e-40af-9c36-ddc8f4e0c5c5 21611 1 2021-10-27 14:41:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment fc950fe0-4ef9-4f9a-b592-92d4fffdce98 0xc0037911e0 0xc0037911e1}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fc950fe0-4ef9-4f9a-b592-92d4fffdce98\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:41:04 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003791278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:41:04.572: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Oct 27 14:41:04.572: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-1094 6534789d-68ce-4bfa-95d8-e1d75293960e 21604 2 2021-10-27 14:41:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment fc950fe0-4ef9-4f9a-b592-92d4fffdce98 0xc0037910c7 0xc0037910c8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:41:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fc950fe0-4ef9-4f9a-b592-92d4fffdce98\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:41:04 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003791178 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:41:04.576: INFO: Pod "test-recreate-deployment-85d47dcb4-pxlrx" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-pxlrx test-recreate-deployment-85d47dcb4- deployment-1094 000f4788-ec26-49e2-9fb8-8006af75e988 21613 0 2021-10-27 14:41:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 80e2c044-9f4e-40af-9c36-ddc8f4e0c5c5 0xc0037916d0 0xc0037916d1}] [] [{kube-controller-manager Update v1 2021-10-27 14:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80e2c044-9f4e-40af-9c36-ddc8f4e0c5c5\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:41:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bg4xh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bg4xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCont
ainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:41:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:41:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:41:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:41:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 14:41:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:04.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1094" for this suite. +•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":179,"skipped":3210,"failed":0} +SSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:04.589: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename lease-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-5828 +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:04.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-5828" for this suite. 
+•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":180,"skipped":3213,"failed":0} + +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:04.828: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7717 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-81313fef-7c36-4d04-8b9d-7a9dc4976efa +STEP: Creating a pod to test consume secrets +Oct 27 14:41:04.991: INFO: Waiting up to 5m0s for pod "pod-secrets-6c4e62c3-36d5-4c35-9eb4-110218683370" in namespace "secrets-7717" to be "Succeeded or Failed" +Oct 27 14:41:04.995: INFO: Pod "pod-secrets-6c4e62c3-36d5-4c35-9eb4-110218683370": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073148ms +Oct 27 14:41:07.001: INFO: Pod "pod-secrets-6c4e62c3-36d5-4c35-9eb4-110218683370": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010182432s +STEP: Saw pod success +Oct 27 14:41:07.002: INFO: Pod "pod-secrets-6c4e62c3-36d5-4c35-9eb4-110218683370" satisfied condition "Succeeded or Failed" +Oct 27 14:41:07.006: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-6c4e62c3-36d5-4c35-9eb4-110218683370 container secret-volume-test: +STEP: delete the pod +Oct 27 14:41:07.064: INFO: Waiting for pod pod-secrets-6c4e62c3-36d5-4c35-9eb4-110218683370 to disappear +Oct 27 14:41:07.069: INFO: Pod pod-secrets-6c4e62c3-36d5-4c35-9eb4-110218683370 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:07.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7717" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":181,"skipped":3213,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:07.082: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8411 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name projected-secret-test-4ff59158-0ce6-4d3b-b923-537559cf497d +STEP: Creating a pod to test consume secrets +Oct 27 14:41:07.248: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2c8fa15f-e5e1-4ddd-b844-3f236a77eee4" in namespace "projected-8411" to be "Succeeded or Failed" +Oct 27 14:41:07.252: INFO: Pod "pod-projected-secrets-2c8fa15f-e5e1-4ddd-b844-3f236a77eee4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.415757ms +Oct 27 14:41:09.259: INFO: Pod "pod-projected-secrets-2c8fa15f-e5e1-4ddd-b844-3f236a77eee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010455836s +STEP: Saw pod success +Oct 27 14:41:09.259: INFO: Pod "pod-projected-secrets-2c8fa15f-e5e1-4ddd-b844-3f236a77eee4" satisfied condition "Succeeded or Failed" +Oct 27 14:41:09.264: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-secrets-2c8fa15f-e5e1-4ddd-b844-3f236a77eee4 container secret-volume-test: +STEP: delete the pod +Oct 27 14:41:09.326: INFO: Waiting for pod pod-projected-secrets-2c8fa15f-e5e1-4ddd-b844-3f236a77eee4 to disappear +Oct 27 14:41:09.330: INFO: Pod pod-projected-secrets-2c8fa15f-e5e1-4ddd-b844-3f236a77eee4 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:09.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8411" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":182,"skipped":3236,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:09.343: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2068 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:41:09.746: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:41:12.773: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:41:12.779: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3214-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:16.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2068" for this suite. +STEP: Destroying namespace "webhook-2068-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":183,"skipped":3258,"failed":0} +SSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:16.126: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-1492 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-n6q66 in namespace proxy-1492 +I1027 14:41:16.295853 5703 runners.go:190] Created replication controller with name: proxy-service-n6q66, namespace: proxy-1492, replica count: 1 +I1027 14:41:17.346694 5703 runners.go:190] proxy-service-n6q66 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:41:18.347859 5703 runners.go:190] proxy-service-n6q66 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:41:18.353: INFO: setup took 2.072498905s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Oct 27 14:41:18.463: INFO: (0) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 110.255889ms) +Oct 27 14:41:18.466: INFO: (0) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... 
(200; 113.399125ms) +Oct 27 14:41:18.466: INFO: (0) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 113.576478ms) +Oct 27 14:41:18.468: INFO: (0) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 115.009704ms) +Oct 27 14:41:18.472: INFO: (0) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 119.14039ms) +Oct 27 14:41:18.472: INFO: (0) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 119.104793ms) +Oct 27 14:41:18.472: INFO: (0) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 119.17999ms) +Oct 27 14:41:18.472: INFO: (0) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 119.099802ms) +Oct 27 14:41:18.472: INFO: (0) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 119.285877ms) +Oct 27 14:41:18.472: INFO: (0) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 119.212195ms) +Oct 27 14:41:18.472: INFO: (0) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 119.196282ms) +Oct 27 14:41:18.474: INFO: (0) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test<... (200; 12.534573ms) +Oct 27 14:41:18.489: INFO: (1) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 12.586957ms) +Oct 27 14:41:18.489: INFO: (1) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 12.571735ms) +Oct 27 14:41:18.489: INFO: (1) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 12.563905ms) +Oct 27 14:41:18.489: INFO: (1) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 12.563828ms) +Oct 27 14:41:18.494: INFO: (1) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 17.519969ms) +Oct 27 14:41:18.494: INFO: (1) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 17.555758ms) +Oct 27 14:41:18.563: INFO: (1) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 86.270014ms) +Oct 27 14:41:18.563: INFO: (1) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 86.276438ms) +Oct 27 14:41:18.574: INFO: (2) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... (200; 11.289679ms) +Oct 27 14:41:18.574: INFO: (2) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 11.42266ms) +Oct 27 14:41:18.574: INFO: (2) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 11.330502ms) +Oct 27 14:41:18.574: INFO: (2) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 11.505861ms) +Oct 27 14:41:18.574: INFO: (2) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... 
(200; 11.522087ms) +Oct 27 14:41:18.574: INFO: (2) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 11.353751ms) +Oct 27 14:41:18.575: INFO: (2) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 12.258856ms) +Oct 27 14:41:18.575: INFO: (2) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 12.431985ms) +Oct 27 14:41:18.578: INFO: (2) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 14.785349ms) +Oct 27 14:41:18.578: INFO: (2) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 14.761091ms) +Oct 27 14:41:18.578: INFO: (2) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 14.771051ms) +Oct 27 14:41:18.578: INFO: (2) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 14.734553ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 11.474124ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 11.530985ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 11.549209ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 11.696449ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... (200; 11.67857ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 11.78725ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 11.589682ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 11.652244ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 12.167862ms) +Oct 27 14:41:18.590: INFO: (3) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 12.053497ms) +Oct 27 14:41:18.592: INFO: (3) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 14.070617ms) +Oct 27 14:41:18.594: INFO: (3) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 16.042057ms) +Oct 27 14:41:18.594: INFO: (3) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 16.098988ms) +Oct 27 14:41:18.594: INFO: (3) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 16.031542ms) +Oct 27 14:41:18.664: INFO: (4) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 69.107467ms) +Oct 27 14:41:18.664: INFO: (4) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 69.05834ms) +Oct 27 14:41:18.664: INFO: (4) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 69.331991ms) +Oct 27 14:41:18.664: INFO: (4) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 69.242032ms) +Oct 27 14:41:18.664: INFO: (4) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... 
(200; 69.401908ms) +Oct 27 14:41:18.664: INFO: (4) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 69.569444ms) +Oct 27 14:41:18.667: INFO: (4) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 72.550502ms) +Oct 27 14:41:18.667: INFO: (4) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 72.504829ms) +Oct 27 14:41:18.667: INFO: (4) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 72.439509ms) +Oct 27 14:41:18.669: INFO: (4) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 75.101469ms) +Oct 27 14:41:18.669: INFO: (4) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 74.957827ms) +Oct 27 14:41:18.669: INFO: (4) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 75.034883ms) +Oct 27 14:41:18.673: INFO: (4) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 78.829141ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 12.680073ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test<... (200; 13.350465ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 13.111159ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 13.082356ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 13.274844ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 12.869222ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 12.821936ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 13.603641ms) +Oct 27 14:41:18.687: INFO: (5) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 12.951388ms) +Oct 27 14:41:18.688: INFO: (5) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 13.193142ms) +Oct 27 14:41:18.692: INFO: (5) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 18.162574ms) +Oct 27 14:41:18.692: INFO: (5) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 17.425903ms) +Oct 27 14:41:18.692: INFO: (5) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 17.873915ms) +Oct 27 14:41:18.692: INFO: (5) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 17.936841ms) +Oct 27 14:41:18.763: INFO: (6) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 71.380657ms) +Oct 27 14:41:18.763: INFO: (6) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 71.409293ms) +Oct 27 14:41:18.764: INFO: (6) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 71.567592ms) +Oct 27 14:41:18.763: INFO: (6) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... 
(200; 71.434781ms) +Oct 27 14:41:18.764: INFO: (6) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 71.499311ms) +Oct 27 14:41:18.764: INFO: (6) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 71.429467ms) +Oct 27 14:41:18.764: INFO: (6) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 71.527772ms) +Oct 27 14:41:18.764: INFO: (6) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... (200; 14.11345ms) +Oct 27 14:41:18.783: INFO: (7) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 14.163934ms) +Oct 27 14:41:18.783: INFO: (7) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test (200; 14.202512ms) +Oct 27 14:41:18.783: INFO: (7) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 14.176943ms) +Oct 27 14:41:18.783: INFO: (7) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 14.238297ms) +Oct 27 14:41:18.783: INFO: (7) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 14.012381ms) +Oct 27 14:41:18.783: INFO: (7) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 14.180882ms) +Oct 27 14:41:18.783: INFO: (7) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 14.20302ms) +Oct 27 14:41:18.788: INFO: (7) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 18.709557ms) +Oct 27 14:41:18.788: INFO: (7) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 18.668933ms) +Oct 27 14:41:18.788: INFO: (7) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 18.729235ms) +Oct 27 14:41:18.788: INFO: (7) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 18.713922ms) +Oct 27 14:41:18.799: INFO: (8) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 11.078922ms) +Oct 27 14:41:18.799: INFO: (8) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 11.146591ms) +Oct 27 14:41:18.799: INFO: (8) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 11.276683ms) +Oct 27 14:41:18.799: INFO: (8) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 11.111071ms) +Oct 27 14:41:18.799: INFO: (8) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 11.199551ms) +Oct 27 14:41:18.814: INFO: (8) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 25.893302ms) +Oct 27 14:41:18.816: INFO: (8) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test<... 
(200; 28.037705ms) +Oct 27 14:41:18.816: INFO: (8) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 28.027272ms) +Oct 27 14:41:18.816: INFO: (8) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 28.04167ms) +Oct 27 14:41:18.818: INFO: (8) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 30.076029ms) +Oct 27 14:41:18.818: INFO: (8) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 30.047612ms) +Oct 27 14:41:18.863: INFO: (8) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 74.708305ms) +Oct 27 14:41:18.864: INFO: (8) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 76.568213ms) +Oct 27 14:41:18.909: INFO: (8) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 121.318505ms) +Oct 27 14:41:18.921: INFO: (9) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 11.668361ms) +Oct 27 14:41:18.921: INFO: (9) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 11.828894ms) +Oct 27 14:41:18.921: INFO: (9) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 11.860683ms) +Oct 27 14:41:18.921: INFO: (9) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 11.713933ms) +Oct 27 14:41:18.921: INFO: (9) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 11.719983ms) +Oct 27 14:41:18.921: INFO: (9) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 11.936756ms) +Oct 27 14:41:18.921: INFO: (9) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... (200; 12.097919ms) +Oct 27 14:41:18.922: INFO: (9) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 11.955304ms) +Oct 27 14:41:18.922: INFO: (9) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 11.855583ms) +Oct 27 14:41:18.922: INFO: (9) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 11.903571ms) +Oct 27 14:41:18.927: INFO: (9) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 17.030867ms) +Oct 27 14:41:18.927: INFO: (9) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 16.892979ms) +Oct 27 14:41:18.927: INFO: (9) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 17.005228ms) +Oct 27 14:41:18.927: INFO: (9) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 17.022827ms) +Oct 27 14:41:18.937: INFO: (10) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 10.452613ms) +Oct 27 14:41:18.937: INFO: (10) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 10.568427ms) +Oct 27 14:41:18.937: INFO: (10) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 10.52773ms) +Oct 27 14:41:18.938: INFO: (10) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... 
(200; 10.597672ms) +Oct 27 14:41:18.937: INFO: (10) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 10.506ms) +Oct 27 14:41:18.938: INFO: (10) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 10.74065ms) +Oct 27 14:41:18.938: INFO: (10) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 10.593885ms) +Oct 27 14:41:18.938: INFO: (10) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test (200; 73.036761ms) +Oct 27 14:41:19.016: INFO: (11) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 73.032468ms) +Oct 27 14:41:19.016: INFO: (11) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... (200; 73.09169ms) +Oct 27 14:41:19.016: INFO: (11) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 73.065697ms) +Oct 27 14:41:19.016: INFO: (11) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 73.046333ms) +Oct 27 14:41:19.016: INFO: (11) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 73.175371ms) +Oct 27 14:41:19.016: INFO: (11) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 73.280144ms) +Oct 27 14:41:19.017: INFO: (11) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 73.630544ms) +Oct 27 14:41:19.017: INFO: (11) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 73.800611ms) +Oct 27 14:41:19.017: INFO: (11) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 73.772277ms) +Oct 27 14:41:19.017: INFO: (11) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 73.944258ms) +Oct 27 14:41:19.019: INFO: (11) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 76.336343ms) +Oct 27 14:41:19.019: INFO: (11) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 76.248953ms) +Oct 27 14:41:19.019: INFO: (11) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 76.303233ms) +Oct 27 14:41:19.019: INFO: (11) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 76.343807ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 12.090166ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 12.246786ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test (200; 12.255929ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 12.211303ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 12.345355ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 12.303914ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... 
(200; 12.29218ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 12.303535ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 12.441635ms) +Oct 27 14:41:19.032: INFO: (12) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 12.424137ms) +Oct 27 14:41:19.037: INFO: (12) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 17.434226ms) +Oct 27 14:41:19.037: INFO: (12) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 17.397454ms) +Oct 27 14:41:19.037: INFO: (12) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 17.548997ms) +Oct 27 14:41:19.037: INFO: (12) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 17.447141ms) +Oct 27 14:41:19.049: INFO: (13) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 12.283832ms) +Oct 27 14:41:19.049: INFO: (13) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 12.197793ms) +Oct 27 14:41:19.049: INFO: (13) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 12.435357ms) +Oct 27 14:41:19.049: INFO: (13) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 12.265749ms) +Oct 27 14:41:19.049: INFO: (13) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 12.176667ms) +Oct 27 14:41:19.049: INFO: (13) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 12.261399ms) +Oct 27 14:41:19.049: INFO: (13) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test (200; 28.749412ms) +Oct 27 14:41:19.115: INFO: (13) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 77.385124ms) +Oct 27 14:41:19.115: INFO: (13) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 77.450778ms) +Oct 27 14:41:19.115: INFO: (13) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 77.495231ms) +Oct 27 14:41:19.116: INFO: (13) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 78.697373ms) +Oct 27 14:41:19.132: INFO: (14) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... 
(200; 43.603918ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 43.695984ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 43.673048ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 43.655255ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 43.508967ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 43.749269ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 43.762429ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 43.583484ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 43.625811ms) +Oct 27 14:41:19.160: INFO: (14) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 43.578407ms) +Oct 27 14:41:19.165: INFO: (14) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 48.755045ms) +Oct 27 14:41:19.165: INFO: (14) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 48.882795ms) +Oct 27 14:41:19.165: INFO: (14) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 48.881683ms) +Oct 27 14:41:19.176: INFO: (15) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 10.702591ms) +Oct 27 14:41:19.259: INFO: (15) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 93.705704ms) +Oct 27 14:41:19.259: INFO: (15) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 93.754462ms) +Oct 27 14:41:19.260: INFO: (15) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... 
(200; 94.565732ms) +Oct 27 14:41:19.260: INFO: (15) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 94.535965ms) +Oct 27 14:41:19.260: INFO: (15) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 94.658265ms) +Oct 27 14:41:19.265: INFO: (15) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 100.149286ms) +Oct 27 14:41:19.272: INFO: (15) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 106.980143ms) +Oct 27 14:41:19.359: INFO: (15) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 194.060493ms) +Oct 27 14:41:19.359: INFO: (15) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 194.16789ms) +Oct 27 14:41:19.359: INFO: (15) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 194.330818ms) +Oct 27 14:41:19.359: INFO: (15) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 194.173309ms) +Oct 27 14:41:19.360: INFO: (15) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 194.433109ms) +Oct 27 14:41:19.371: INFO: (16) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 10.832805ms) +Oct 27 14:41:19.374: INFO: (16) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 13.800806ms) +Oct 27 14:41:19.374: INFO: (16) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test<... (200; 13.84293ms) +Oct 27 14:41:19.374: INFO: (16) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 13.895034ms) +Oct 27 14:41:19.374: INFO: (16) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 13.972554ms) +Oct 27 14:41:19.374: INFO: (16) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 13.899217ms) +Oct 27 14:41:19.375: INFO: (16) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 14.928011ms) +Oct 27 14:41:19.378: INFO: (16) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 18.723572ms) +Oct 27 14:41:19.379: INFO: (16) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 18.733118ms) +Oct 27 14:41:19.379: INFO: (16) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 18.805597ms) +Oct 27 14:41:19.463: INFO: (17) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 84.437797ms) +Oct 27 14:41:19.463: INFO: (17) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 84.613911ms) +Oct 27 14:41:19.463: INFO: (17) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 84.408182ms) +Oct 27 14:41:19.463: INFO: (17) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... 
(200; 84.405916ms) +Oct 27 14:41:19.463: INFO: (17) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 84.436189ms) +Oct 27 14:41:19.465: INFO: (17) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 85.819829ms) +Oct 27 14:41:19.466: INFO: (17) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 87.119406ms) +Oct 27 14:41:19.466: INFO: (17) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 87.197977ms) +Oct 27 14:41:19.466: INFO: (17) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 87.087878ms) +Oct 27 14:41:19.466: INFO: (17) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 87.207939ms) +Oct 27 14:41:19.466: INFO: (17) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: test (200; 16.536395ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 16.790456ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname2/proxy/: bar (200; 16.711597ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 16.766598ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 16.586326ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: ... (200; 16.82409ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... (200; 16.708025ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 16.605659ms) +Oct 27 14:41:19.488: INFO: (18) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 16.702109ms) +Oct 27 14:41:19.493: INFO: (18) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 21.415632ms) +Oct 27 14:41:19.493: INFO: (18) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 21.358181ms) +Oct 27 14:41:19.493: INFO: (18) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 21.355857ms) +Oct 27 14:41:19.493: INFO: (18) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 21.347224ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:460/proxy/: tls baz (200; 11.177972ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:462/proxy/: tls qux (200; 11.169004ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 11.263163ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 11.247846ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:1080/proxy/: ... (200; 11.268225ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:1080/proxy/: test<... 
(200; 11.355543ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l:160/proxy/: foo (200; 11.241857ms) +Oct 27 14:41:19.504: INFO: (19) /api/v1/namespaces/proxy-1492/pods/proxy-service-n6q66-s9d4l/proxy/: test (200; 11.337469ms) +Oct 27 14:41:19.559: INFO: (19) /api/v1/namespaces/proxy-1492/services/proxy-service-n6q66:portname1/proxy/: foo (200; 66.019799ms) +Oct 27 14:41:19.559: INFO: (19) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname2/proxy/: bar (200; 66.102004ms) +Oct 27 14:41:19.559: INFO: (19) /api/v1/namespaces/proxy-1492/pods/http:proxy-service-n6q66-s9d4l:162/proxy/: bar (200; 65.927043ms) +Oct 27 14:41:19.559: INFO: (19) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname2/proxy/: tls qux (200; 65.873825ms) +Oct 27 14:41:19.559: INFO: (19) /api/v1/namespaces/proxy-1492/services/http:proxy-service-n6q66:portname1/proxy/: foo (200; 66.054677ms) +Oct 27 14:41:19.559: INFO: (19) /api/v1/namespaces/proxy-1492/services/https:proxy-service-n6q66:tlsportname1/proxy/: tls baz (200; 65.994548ms) +Oct 27 14:41:19.559: INFO: (19) /api/v1/namespaces/proxy-1492/pods/https:proxy-service-n6q66-s9d4l:443/proxy/: >> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-single-pod +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-6747 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Oct 27 14:41:21.294: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:42:21.338: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:42:21.343: INFO: Starting informer... +STEP: Starting pod... +Oct 27 14:42:21.566: INFO: Pod is running on izgw89f23rpcwrl79tpgp1z. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Oct 27 14:42:21.587: INFO: Pod wasn't evicted. Proceeding +Oct 27 14:42:21.587: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Oct 27 14:43:36.608: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:36.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-6747" for this suite. 
+•{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":185,"skipped":3267,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:36.621: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6648 +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-bee1ec28-c3a4-497c-8a1c-82406e8dd9cc +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:38.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6648" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":186,"skipped":3292,"failed":0} +S +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:38.881: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-3535 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Oct 27 14:43:39.046: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:43:41.052: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:43:41.073: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:43:43.079: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 14:43:43.090: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 14:43:43.100: INFO: Pod pod-with-prestop-http-hook still exists +Oct 27 14:43:45.100: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 14:43:45.106: INFO: Pod pod-with-prestop-http-hook still exists +Oct 27 14:43:47.101: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 14:43:47.106: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:47.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-3535" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":187,"skipped":3293,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:47.132: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7179 +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:43:47.281: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:47.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7179" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":188,"skipped":3307,"failed":0} +S +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:47.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4230 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 14:43:48.379: INFO: The status of Pod labelsupdatef18e465a-047d-4e02-b927-bbfb79832b33 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:43:50.385: INFO: The status of Pod labelsupdatef18e465a-047d-4e02-b927-bbfb79832b33 is Running (Ready = true) +Oct 27 14:43:50.915: INFO: Successfully updated pod "labelsupdatef18e465a-047d-4e02-b927-bbfb79832b33" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:54.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4230" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":189,"skipped":3308,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:54.967: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7761 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:43:55.114: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7761 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 14:43:55.499: INFO: stderr: "" +Oct 27 14:43:55.499: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Oct 27 14:44:00.551: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7761 get pod e2e-test-httpd-pod -o json' +Oct 27 14:44:00.622: INFO: stderr: "" +Oct 27 14:44:00.622: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"f4b1d7775a6bf6c37c7d6da76fbf5a4bc5196285e87e8566f4b6c352711172bd\",\n \"cni.projectcalico.org/podIP\": \"172.16.1.204/32\",\n \"cni.projectcalico.org/podIPs\": \"172.16.1.204/32\",\n \"kubernetes.io/psp\": \"e2e-test-privileged-psp\"\n },\n \"creationTimestamp\": \"2021-10-27T14:43:55Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7761\",\n \"resourceVersion\": \"22744\",\n \"uid\": \"de28b305-c226-4e3e-a24d-cdd5986bf045\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"env\": [\n {\n \"name\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"api.tmanu-jzf.it.internal.staging.k8s.ondemand.com\"\n }\n ],\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-wtg9x\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"izgw89f23rpcwrl79tpgp1z\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-wtg9x\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:43:55Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:43:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:43:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:43:55Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://eb83f91149e83f85c636cc67f99d88674ea2978b63a22f394a7b302dc771ad3d\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-27T14:43:56Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.250.8.35\",\n \"phase\": \"Running\",\n \"podIP\": \"172.16.1.204\",\n \"podIPs\": [\n {\n \"ip\": \"172.16.1.204\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-27T14:43:55Z\"\n }\n}\n" +STEP: replace the image in the pod +Oct 27 14:44:00.622: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7761 replace -f -' +Oct 27 14:44:00.793: INFO: stderr: "" +Oct 27 14:44:00.793: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 +Oct 27 14:44:00.798: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7761 delete pods e2e-test-httpd-pod' +Oct 27 14:44:02.471: INFO: stderr: "" +Oct 27 14:44:02.471: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:44:02.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7761" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":190,"skipped":3309,"failed":0} +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:44:02.484: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9372 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 14:44:02.647: INFO: Waiting up to 5m0s for pod "pod-db682ab2-3dcf-42c9-b2e1-b992a0c71386" in namespace "emptydir-9372" to be "Succeeded or Failed" +Oct 27 14:44:02.652: INFO: Pod "pod-db682ab2-3dcf-42c9-b2e1-b992a0c71386": Phase="Pending", Reason="", readiness=false. Elapsed: 4.634198ms +Oct 27 14:44:04.658: INFO: Pod "pod-db682ab2-3dcf-42c9-b2e1-b992a0c71386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010228571s +STEP: Saw pod success +Oct 27 14:44:04.658: INFO: Pod "pod-db682ab2-3dcf-42c9-b2e1-b992a0c71386" satisfied condition "Succeeded or Failed" +Oct 27 14:44:04.662: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-db682ab2-3dcf-42c9-b2e1-b992a0c71386 container test-container: +STEP: delete the pod +Oct 27 14:44:04.722: INFO: Waiting for pod pod-db682ab2-3dcf-42c9-b2e1-b992a0c71386 to disappear +Oct 27 14:44:04.726: INFO: Pod pod-db682ab2-3dcf-42c9-b2e1-b992a0c71386 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:44:04.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9372" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":191,"skipped":3314,"failed":0} +SSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:44:04.739: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2152 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:00.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2152" for this suite. +•{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":192,"skipped":3317,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:00.925: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6578 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption is created +Oct 27 14:46:01.091: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:46:03.098: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:04.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6578" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":193,"skipped":3330,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:04.132: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3699 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:46:04.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1cb75600-ced3-48f6-bb3a-fd6d0e141358" in namespace "downward-api-3699" to be "Succeeded or Failed" +Oct 27 14:46:04.298: INFO: Pod "downwardapi-volume-1cb75600-ced3-48f6-bb3a-fd6d0e141358": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513264ms +Oct 27 14:46:06.305: INFO: Pod "downwardapi-volume-1cb75600-ced3-48f6-bb3a-fd6d0e141358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010963127s +STEP: Saw pod success +Oct 27 14:46:06.305: INFO: Pod "downwardapi-volume-1cb75600-ced3-48f6-bb3a-fd6d0e141358" satisfied condition "Succeeded or Failed" +Oct 27 14:46:06.309: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-1cb75600-ced3-48f6-bb3a-fd6d0e141358 container client-container: +STEP: delete the pod +Oct 27 14:46:06.336: INFO: Waiting for pod downwardapi-volume-1cb75600-ced3-48f6-bb3a-fd6d0e141358 to disappear +Oct 27 14:46:06.341: INFO: Pod downwardapi-volume-1cb75600-ced3-48f6-bb3a-fd6d0e141358 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:06.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3699" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":194,"skipped":3343,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:06.355: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8287 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8287.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8287.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:46:08.612: INFO: DNS probes using dns-test-68938c75-afb3-47a1-8cde-8f5ec937853e succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8287.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8287.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:46:10.685: INFO: File wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:10.737: INFO: File jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:10.737: INFO: Lookups using dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a failed for: [wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local] + +Oct 27 14:46:15.790: INFO: File wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:15.836: INFO: File jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. 
+' instead of 'bar.example.com.' +Oct 27 14:46:15.836: INFO: Lookups using dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a failed for: [wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local] + +Oct 27 14:46:20.747: INFO: File wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:20.799: INFO: File jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:20.799: INFO: Lookups using dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a failed for: [wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local] + +Oct 27 14:46:25.746: INFO: File wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:25.754: INFO: File jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:25.754: INFO: Lookups using dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a failed for: [wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local] + +Oct 27 14:46:30.746: INFO: File wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:30.754: INFO: File jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:30.754: INFO: Lookups using dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a failed for: [wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local] + +Oct 27 14:46:35.747: INFO: File wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:35.800: INFO: File jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local from pod dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 27 14:46:35.800: INFO: Lookups using dns-8287/dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a failed for: [wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local] + +Oct 27 14:46:40.755: INFO: DNS probes using dns-test-52fb9ddc-20db-40cb-9a8e-01637ff5502a succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8287.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8287.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8287.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8287.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:46:42.897: INFO: DNS probes using dns-test-5fdd6b50-12e4-4baf-8564-c7355e1c1260 succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:42.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8287" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":195,"skipped":3358,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:42.933: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1443 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:46:43.097: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b7f43ba-b6d5-4f49-8036-d13e2003e2bd" in namespace "projected-1443" to be "Succeeded or Failed" +Oct 27 14:46:43.102: INFO: Pod "downwardapi-volume-9b7f43ba-b6d5-4f49-8036-d13e2003e2bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.88712ms +Oct 27 14:46:45.108: INFO: Pod "downwardapi-volume-9b7f43ba-b6d5-4f49-8036-d13e2003e2bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011754926s +STEP: Saw pod success +Oct 27 14:46:45.109: INFO: Pod "downwardapi-volume-9b7f43ba-b6d5-4f49-8036-d13e2003e2bd" satisfied condition "Succeeded or Failed" +Oct 27 14:46:45.113: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-9b7f43ba-b6d5-4f49-8036-d13e2003e2bd container client-container: +STEP: delete the pod +Oct 27 14:46:45.133: INFO: Waiting for pod downwardapi-volume-9b7f43ba-b6d5-4f49-8036-d13e2003e2bd to disappear +Oct 27 14:46:45.138: INFO: Pod downwardapi-volume-9b7f43ba-b6d5-4f49-8036-d13e2003e2bd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:45.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1443" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":196,"skipped":3391,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:45.151: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-3569 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod with failed condition +STEP: updating the pod +Oct 27 14:48:45.845: INFO: Successfully updated pod "var-expansion-9c77cdc7-f67f-40fd-86f6-2a2e2c61a02b" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Oct 27 14:48:47.856: INFO: Deleting pod "var-expansion-9c77cdc7-f67f-40fd-86f6-2a2e2c61a02b" in namespace "var-expansion-3569" +Oct 27 14:48:47.862: INFO: Wait up to 5m0s for pod "var-expansion-9c77cdc7-f67f-40fd-86f6-2a2e2c61a02b" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:19.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3569" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":197,"skipped":3398,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:19.886: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1459 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:49:20.054: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:49:20.069: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:20.069: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:49:21.083: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:21.083: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:49:22.083: INFO: Number of nodes with available pods: 2 +Oct 27 14:49:22.083: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Oct 27 14:49:22.121: INFO: Wrong image for pod: daemon-set-m7lf2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:49:23.132: INFO: Wrong image for pod: daemon-set-m7lf2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:49:24.131: INFO: Wrong image for pod: daemon-set-m7lf2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:49:25.176: INFO: Wrong image for pod: daemon-set-m7lf2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:49:25.176: INFO: Pod daemon-set-xh55b is not available +Oct 27 14:49:26.132: INFO: Wrong image for pod: daemon-set-m7lf2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:49:26.132: INFO: Pod daemon-set-xh55b is not available +Oct 27 14:49:28.132: INFO: Pod daemon-set-968mk is not available +STEP: Check that daemon pods are still running on every node of the cluster. 
+Oct 27 14:49:28.149: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:28.149: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:49:29.165: INFO: Number of nodes with available pods: 2 +Oct 27 14:49:29.165: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1459, will wait for the garbage collector to delete the pods +Oct 27 14:49:29.256: INFO: Deleting DaemonSet.extensions daemon-set took: 7.077736ms +Oct 27 14:49:29.356: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.250714ms +Oct 27 14:49:31.362: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:31.362: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:49:31.366: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"24574"},"items":null} + +Oct 27 14:49:31.371: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"24574"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:31.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1459" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":198,"skipped":3405,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:31.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svc-latency +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-5061 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:49:31.550: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating replication controller svc-latency-rc in namespace svc-latency-5061 +I1027 14:49:31.561998 5703 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5061, replica count: 1 +I1027 14:49:32.613763 5703 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:49:32.725: INFO: Created: latency-svc-8qmkm +Oct 27 14:49:32.730: INFO: Got endpoints: latency-svc-8qmkm [15.693633ms] +Oct 27 14:49:32.738: INFO: Created: latency-svc-fdq75 +Oct 27 14:49:32.741: INFO: Got endpoints: latency-svc-fdq75 [11.502941ms] +Oct 27 14:49:32.742: INFO: Created: latency-svc-dj878 +Oct 27 14:49:32.745: INFO: Got endpoints: latency-svc-dj878 [14.998797ms] +Oct 27 
14:49:32.746: INFO: Created: latency-svc-sj57d +Oct 27 14:49:32.747: INFO: Got endpoints: latency-svc-sj57d [17.12535ms] +Oct 27 14:49:32.751: INFO: Created: latency-svc-wlv7c +Oct 27 14:49:32.755: INFO: Got endpoints: latency-svc-wlv7c [24.867183ms] +Oct 27 14:49:32.755: INFO: Created: latency-svc-bbpt5 +Oct 27 14:49:32.759: INFO: Created: latency-svc-ljmmt +Oct 27 14:49:32.759: INFO: Got endpoints: latency-svc-bbpt5 [29.079666ms] +Oct 27 14:49:32.762: INFO: Got endpoints: latency-svc-ljmmt [32.573636ms] +Oct 27 14:49:32.763: INFO: Created: latency-svc-pj2jf +Oct 27 14:49:32.767: INFO: Got endpoints: latency-svc-pj2jf [36.797933ms] +Oct 27 14:49:32.771: INFO: Created: latency-svc-rfxrg +Oct 27 14:49:32.773: INFO: Got endpoints: latency-svc-rfxrg [43.027337ms] +Oct 27 14:49:32.776: INFO: Created: latency-svc-kshc5 +Oct 27 14:49:32.780: INFO: Created: latency-svc-pjlmq +Oct 27 14:49:32.780: INFO: Got endpoints: latency-svc-kshc5 [50.499333ms] +Oct 27 14:49:32.782: INFO: Got endpoints: latency-svc-pjlmq [52.085275ms] +Oct 27 14:49:32.785: INFO: Created: latency-svc-zlf6t +Oct 27 14:49:32.788: INFO: Got endpoints: latency-svc-zlf6t [57.77868ms] +Oct 27 14:49:32.788: INFO: Created: latency-svc-5qfjc +Oct 27 14:49:32.790: INFO: Got endpoints: latency-svc-5qfjc [59.535701ms] +Oct 27 14:49:32.793: INFO: Created: latency-svc-xbrs7 +Oct 27 14:49:32.795: INFO: Got endpoints: latency-svc-xbrs7 [65.137892ms] +Oct 27 14:49:32.797: INFO: Created: latency-svc-j9dvg +Oct 27 14:49:32.801: INFO: Created: latency-svc-srml7 +Oct 27 14:49:32.801: INFO: Got endpoints: latency-svc-j9dvg [71.176577ms] +Oct 27 14:49:32.861: INFO: Created: latency-svc-6j48f +Oct 27 14:49:32.862: INFO: Got endpoints: latency-svc-srml7 [131.622936ms] +Oct 27 14:49:32.865: INFO: Got endpoints: latency-svc-6j48f [123.158794ms] +Oct 27 14:49:32.866: INFO: Created: latency-svc-86mvp +Oct 27 14:49:32.874: INFO: Got endpoints: latency-svc-86mvp [128.642399ms] +Oct 27 14:49:32.875: INFO: Created: latency-svc-hcwp8 +Oct 27 14:49:32.880: INFO: Got endpoints: latency-svc-hcwp8 [132.936841ms] +Oct 27 14:49:32.880: INFO: Created: latency-svc-xswlg +Oct 27 14:49:32.883: INFO: Got endpoints: latency-svc-xswlg [127.83085ms] +Oct 27 14:49:32.887: INFO: Created: latency-svc-6q49f +Oct 27 14:49:32.890: INFO: Created: latency-svc-7s4cm +Oct 27 14:49:32.891: INFO: Got endpoints: latency-svc-6q49f [131.387728ms] +Oct 27 14:49:32.894: INFO: Got endpoints: latency-svc-7s4cm [131.157259ms] +Oct 27 14:49:32.895: INFO: Created: latency-svc-znwnz +Oct 27 14:49:32.898: INFO: Created: latency-svc-p4g66 +Oct 27 14:49:32.898: INFO: Got endpoints: latency-svc-znwnz [131.178084ms] +Oct 27 14:49:32.901: INFO: Got endpoints: latency-svc-p4g66 [128.272295ms] +Oct 27 14:49:32.902: INFO: Created: latency-svc-xsl5g +Oct 27 14:49:32.960: INFO: Got endpoints: latency-svc-xsl5g [179.139371ms] +Oct 27 14:49:32.960: INFO: Created: latency-svc-2d844 +Oct 27 14:49:32.963: INFO: Got endpoints: latency-svc-2d844 [181.15253ms] +Oct 27 14:49:32.966: INFO: Created: latency-svc-qhf2r +Oct 27 14:49:32.968: INFO: Got endpoints: latency-svc-qhf2r [180.299244ms] +Oct 27 14:49:32.971: INFO: Created: latency-svc-2bcqq +Oct 27 14:49:32.982: INFO: Got endpoints: latency-svc-2bcqq [192.568515ms] +Oct 27 14:49:32.983: INFO: Created: latency-svc-dlztq +Oct 27 14:49:32.986: INFO: Got endpoints: latency-svc-dlztq [190.465777ms] +Oct 27 14:49:32.988: INFO: Created: latency-svc-hndz9 +Oct 27 14:49:32.991: INFO: Got endpoints: latency-svc-hndz9 [189.666405ms] +Oct 27 14:49:32.992: INFO: Created: 
latency-svc-56rwh +Oct 27 14:49:32.994: INFO: Got endpoints: latency-svc-56rwh [132.537032ms] +Oct 27 14:49:32.997: INFO: Created: latency-svc-g2lz7 +Oct 27 14:49:33.003: INFO: Got endpoints: latency-svc-g2lz7 [137.851977ms] +Oct 27 14:49:33.060: INFO: Created: latency-svc-sgw7d +Oct 27 14:49:33.065: INFO: Created: latency-svc-98l6m +Oct 27 14:49:33.065: INFO: Got endpoints: latency-svc-sgw7d [190.881534ms] +Oct 27 14:49:33.067: INFO: Got endpoints: latency-svc-98l6m [186.887449ms] +Oct 27 14:49:33.069: INFO: Created: latency-svc-zbccp +Oct 27 14:49:33.071: INFO: Got endpoints: latency-svc-zbccp [188.163537ms] +Oct 27 14:49:33.073: INFO: Created: latency-svc-tgrvn +Oct 27 14:49:33.075: INFO: Got endpoints: latency-svc-tgrvn [184.185472ms] +Oct 27 14:49:33.077: INFO: Created: latency-svc-n8wp6 +Oct 27 14:49:33.080: INFO: Got endpoints: latency-svc-n8wp6 [186.253961ms] +Oct 27 14:49:33.082: INFO: Created: latency-svc-6xwlt +Oct 27 14:49:33.087: INFO: Created: latency-svc-f82xc +Oct 27 14:49:33.091: INFO: Created: latency-svc-jvvh2 +Oct 27 14:49:33.096: INFO: Created: latency-svc-dqljs +Oct 27 14:49:33.100: INFO: Created: latency-svc-hgqmw +Oct 27 14:49:33.104: INFO: Created: latency-svc-svn42 +Oct 27 14:49:33.161: INFO: Created: latency-svc-bntdc +Oct 27 14:49:33.165: INFO: Created: latency-svc-7lmvp +Oct 27 14:49:33.169: INFO: Created: latency-svc-dlbz4 +Oct 27 14:49:33.173: INFO: Created: latency-svc-p8zgz +Oct 27 14:49:33.177: INFO: Created: latency-svc-kxcj2 +Oct 27 14:49:33.181: INFO: Created: latency-svc-p4lkv +Oct 27 14:49:33.185: INFO: Created: latency-svc-jdbbc +Oct 27 14:49:33.192: INFO: Created: latency-svc-tdg7m +Oct 27 14:49:33.196: INFO: Created: latency-svc-xkggk +Oct 27 14:49:33.203: INFO: Got endpoints: latency-svc-6xwlt [304.728729ms] +Oct 27 14:49:33.204: INFO: Got endpoints: latency-svc-f82xc [302.091074ms] +Oct 27 14:49:33.212: INFO: Created: latency-svc-p9nnb +Oct 27 14:49:33.216: INFO: Created: latency-svc-kqkps +Oct 27 14:49:33.228: INFO: Got endpoints: latency-svc-jvvh2 [268.376706ms] +Oct 27 14:49:33.238: INFO: Created: latency-svc-vlzv7 +Oct 27 14:49:33.280: INFO: Got endpoints: latency-svc-dqljs [316.389499ms] +Oct 27 14:49:33.289: INFO: Created: latency-svc-pg7qq +Oct 27 14:49:33.329: INFO: Got endpoints: latency-svc-hgqmw [360.333388ms] +Oct 27 14:49:33.350: INFO: Created: latency-svc-w2lds +Oct 27 14:49:33.379: INFO: Got endpoints: latency-svc-svn42 [396.812143ms] +Oct 27 14:49:33.389: INFO: Created: latency-svc-9dvtl +Oct 27 14:49:33.428: INFO: Got endpoints: latency-svc-7lmvp [437.138327ms] +Oct 27 14:49:33.437: INFO: Created: latency-svc-c9pk8 +Oct 27 14:49:33.479: INFO: Got endpoints: latency-svc-bntdc [492.876729ms] +Oct 27 14:49:33.488: INFO: Created: latency-svc-pmnbz +Oct 27 14:49:33.529: INFO: Got endpoints: latency-svc-dlbz4 [534.778527ms] +Oct 27 14:49:33.538: INFO: Created: latency-svc-q2hmc +Oct 27 14:49:33.579: INFO: Got endpoints: latency-svc-p8zgz [576.641613ms] +Oct 27 14:49:33.588: INFO: Created: latency-svc-nvjth +Oct 27 14:49:33.629: INFO: Got endpoints: latency-svc-kxcj2 [564.591241ms] +Oct 27 14:49:33.639: INFO: Created: latency-svc-665z7 +Oct 27 14:49:33.679: INFO: Got endpoints: latency-svc-jdbbc [608.051404ms] +Oct 27 14:49:33.689: INFO: Created: latency-svc-cqmbp +Oct 27 14:49:33.731: INFO: Got endpoints: latency-svc-tdg7m [655.664798ms] +Oct 27 14:49:33.740: INFO: Created: latency-svc-q4rqs +Oct 27 14:49:33.780: INFO: Got endpoints: latency-svc-p4lkv [712.580138ms] +Oct 27 14:49:33.789: INFO: Created: latency-svc-265fx +Oct 27 
14:49:33.829: INFO: Got endpoints: latency-svc-xkggk [748.701811ms] +Oct 27 14:49:33.838: INFO: Created: latency-svc-mzrzc +Oct 27 14:49:33.879: INFO: Got endpoints: latency-svc-p9nnb [675.748742ms] +Oct 27 14:49:33.888: INFO: Created: latency-svc-sct24 +Oct 27 14:49:33.930: INFO: Got endpoints: latency-svc-kqkps [726.104139ms] +Oct 27 14:49:33.939: INFO: Created: latency-svc-fjq7d +Oct 27 14:49:33.979: INFO: Got endpoints: latency-svc-vlzv7 [750.972811ms] +Oct 27 14:49:33.989: INFO: Created: latency-svc-lmj2c +Oct 27 14:49:34.029: INFO: Got endpoints: latency-svc-pg7qq [748.744153ms] +Oct 27 14:49:34.043: INFO: Created: latency-svc-m62tj +Oct 27 14:49:34.080: INFO: Got endpoints: latency-svc-w2lds [751.074848ms] +Oct 27 14:49:34.093: INFO: Created: latency-svc-qknj9 +Oct 27 14:49:34.129: INFO: Got endpoints: latency-svc-9dvtl [749.653164ms] +Oct 27 14:49:34.147: INFO: Created: latency-svc-7tz6q +Oct 27 14:49:34.261: INFO: Got endpoints: latency-svc-c9pk8 [832.791368ms] +Oct 27 14:49:34.263: INFO: Got endpoints: latency-svc-pmnbz [783.816931ms] +Oct 27 14:49:34.271: INFO: Created: latency-svc-g4hr9 +Oct 27 14:49:34.276: INFO: Created: latency-svc-7fpfl +Oct 27 14:49:34.363: INFO: Got endpoints: latency-svc-q2hmc [833.603019ms] +Oct 27 14:49:34.367: INFO: Got endpoints: latency-svc-nvjth [787.331012ms] +Oct 27 14:49:34.371: INFO: Created: latency-svc-g4r2z +Oct 27 14:49:34.376: INFO: Created: latency-svc-jtgww +Oct 27 14:49:34.378: INFO: Got endpoints: latency-svc-665z7 [748.019984ms] +Oct 27 14:49:34.459: INFO: Created: latency-svc-98qqd +Oct 27 14:49:34.461: INFO: Got endpoints: latency-svc-cqmbp [782.064839ms] +Oct 27 14:49:34.470: INFO: Created: latency-svc-kjlwg +Oct 27 14:49:34.479: INFO: Got endpoints: latency-svc-q4rqs [748.377725ms] +Oct 27 14:49:34.487: INFO: Created: latency-svc-n8k9h +Oct 27 14:49:34.529: INFO: Got endpoints: latency-svc-265fx [749.499892ms] +Oct 27 14:49:34.538: INFO: Created: latency-svc-472gd +Oct 27 14:49:34.580: INFO: Got endpoints: latency-svc-mzrzc [751.600134ms] +Oct 27 14:49:34.589: INFO: Created: latency-svc-8hn4z +Oct 27 14:49:34.630: INFO: Got endpoints: latency-svc-sct24 [750.881352ms] +Oct 27 14:49:34.639: INFO: Created: latency-svc-zj5ts +Oct 27 14:49:34.679: INFO: Got endpoints: latency-svc-fjq7d [749.499028ms] +Oct 27 14:49:34.688: INFO: Created: latency-svc-cjml2 +Oct 27 14:49:34.732: INFO: Got endpoints: latency-svc-lmj2c [753.10519ms] +Oct 27 14:49:34.741: INFO: Created: latency-svc-965bc +Oct 27 14:49:34.781: INFO: Got endpoints: latency-svc-m62tj [751.925091ms] +Oct 27 14:49:34.789: INFO: Created: latency-svc-smvqs +Oct 27 14:49:34.828: INFO: Got endpoints: latency-svc-qknj9 [748.187991ms] +Oct 27 14:49:34.838: INFO: Created: latency-svc-7jp9g +Oct 27 14:49:34.880: INFO: Got endpoints: latency-svc-7tz6q [750.726237ms] +Oct 27 14:49:34.889: INFO: Created: latency-svc-9wn2d +Oct 27 14:49:34.929: INFO: Got endpoints: latency-svc-g4hr9 [668.19243ms] +Oct 27 14:49:34.940: INFO: Created: latency-svc-5snkt +Oct 27 14:49:34.979: INFO: Got endpoints: latency-svc-7fpfl [715.981025ms] +Oct 27 14:49:34.991: INFO: Created: latency-svc-r4kdc +Oct 27 14:49:35.028: INFO: Got endpoints: latency-svc-g4r2z [664.824298ms] +Oct 27 14:49:35.036: INFO: Created: latency-svc-t2cg9 +Oct 27 14:49:35.079: INFO: Got endpoints: latency-svc-jtgww [712.045828ms] +Oct 27 14:49:35.088: INFO: Created: latency-svc-b4j5w +Oct 27 14:49:35.129: INFO: Got endpoints: latency-svc-98qqd [751.772547ms] +Oct 27 14:49:35.145: INFO: Created: latency-svc-vjl5k +Oct 27 14:49:35.180: INFO: 
Got endpoints: latency-svc-kjlwg [718.534476ms] +Oct 27 14:49:35.189: INFO: Created: latency-svc-7gzv6 +Oct 27 14:49:35.230: INFO: Got endpoints: latency-svc-n8k9h [750.97779ms] +Oct 27 14:49:35.247: INFO: Created: latency-svc-b9nkb +Oct 27 14:49:35.279: INFO: Got endpoints: latency-svc-472gd [749.75798ms] +Oct 27 14:49:35.288: INFO: Created: latency-svc-mrt7t +Oct 27 14:49:35.329: INFO: Got endpoints: latency-svc-8hn4z [748.34755ms] +Oct 27 14:49:35.346: INFO: Created: latency-svc-wbd7h +Oct 27 14:49:35.379: INFO: Got endpoints: latency-svc-zj5ts [749.185578ms] +Oct 27 14:49:35.388: INFO: Created: latency-svc-zgx87 +Oct 27 14:49:35.430: INFO: Got endpoints: latency-svc-cjml2 [750.854873ms] +Oct 27 14:49:35.447: INFO: Created: latency-svc-v85zk +Oct 27 14:49:35.479: INFO: Got endpoints: latency-svc-965bc [746.503972ms] +Oct 27 14:49:35.489: INFO: Created: latency-svc-r42p2 +Oct 27 14:49:35.529: INFO: Got endpoints: latency-svc-smvqs [748.373608ms] +Oct 27 14:49:35.547: INFO: Created: latency-svc-76kd6 +Oct 27 14:49:35.579: INFO: Got endpoints: latency-svc-7jp9g [751.389536ms] +Oct 27 14:49:35.589: INFO: Created: latency-svc-lbdsb +Oct 27 14:49:35.629: INFO: Got endpoints: latency-svc-9wn2d [749.821155ms] +Oct 27 14:49:35.644: INFO: Created: latency-svc-mp2xl +Oct 27 14:49:35.681: INFO: Got endpoints: latency-svc-5snkt [752.164811ms] +Oct 27 14:49:35.691: INFO: Created: latency-svc-4wt8r +Oct 27 14:49:35.730: INFO: Got endpoints: latency-svc-r4kdc [750.958497ms] +Oct 27 14:49:35.745: INFO: Created: latency-svc-r5nx6 +Oct 27 14:49:35.780: INFO: Got endpoints: latency-svc-t2cg9 [751.779245ms] +Oct 27 14:49:35.789: INFO: Created: latency-svc-wdwml +Oct 27 14:49:35.833: INFO: Got endpoints: latency-svc-b4j5w [754.150142ms] +Oct 27 14:49:35.849: INFO: Created: latency-svc-2l8cn +Oct 27 14:49:35.879: INFO: Got endpoints: latency-svc-vjl5k [749.770793ms] +Oct 27 14:49:35.891: INFO: Created: latency-svc-dsdn4 +Oct 27 14:49:35.931: INFO: Got endpoints: latency-svc-7gzv6 [750.899342ms] +Oct 27 14:49:35.947: INFO: Created: latency-svc-z6j4j +Oct 27 14:49:35.979: INFO: Got endpoints: latency-svc-b9nkb [749.060994ms] +Oct 27 14:49:35.993: INFO: Created: latency-svc-rkz9w +Oct 27 14:49:36.030: INFO: Got endpoints: latency-svc-mrt7t [750.629229ms] +Oct 27 14:49:36.046: INFO: Created: latency-svc-xddv2 +Oct 27 14:49:36.081: INFO: Got endpoints: latency-svc-wbd7h [752.293072ms] +Oct 27 14:49:36.093: INFO: Created: latency-svc-dkhc7 +Oct 27 14:49:36.129: INFO: Got endpoints: latency-svc-zgx87 [749.881484ms] +Oct 27 14:49:36.143: INFO: Created: latency-svc-bm8mn +Oct 27 14:49:36.180: INFO: Got endpoints: latency-svc-v85zk [749.413632ms] +Oct 27 14:49:36.188: INFO: Created: latency-svc-lfptv +Oct 27 14:49:36.231: INFO: Got endpoints: latency-svc-r42p2 [751.721326ms] +Oct 27 14:49:36.242: INFO: Created: latency-svc-zzbqx +Oct 27 14:49:36.279: INFO: Got endpoints: latency-svc-76kd6 [749.94198ms] +Oct 27 14:49:36.288: INFO: Created: latency-svc-f2z5f +Oct 27 14:49:36.329: INFO: Got endpoints: latency-svc-lbdsb [749.413885ms] +Oct 27 14:49:36.338: INFO: Created: latency-svc-gpvl7 +Oct 27 14:49:36.378: INFO: Got endpoints: latency-svc-mp2xl [748.257048ms] +Oct 27 14:49:36.387: INFO: Created: latency-svc-r6vwb +Oct 27 14:49:36.430: INFO: Got endpoints: latency-svc-4wt8r [748.023667ms] +Oct 27 14:49:36.438: INFO: Created: latency-svc-f59cn +Oct 27 14:49:36.480: INFO: Got endpoints: latency-svc-r5nx6 [750.692593ms] +Oct 27 14:49:36.489: INFO: Created: latency-svc-kbf2s +Oct 27 14:49:36.528: INFO: Got endpoints: 
latency-svc-wdwml [748.329024ms] +Oct 27 14:49:36.537: INFO: Created: latency-svc-xmnrb +Oct 27 14:49:36.578: INFO: Got endpoints: latency-svc-2l8cn [744.967865ms] +Oct 27 14:49:36.587: INFO: Created: latency-svc-bzpbr +Oct 27 14:49:36.628: INFO: Got endpoints: latency-svc-dsdn4 [749.143268ms] +Oct 27 14:49:36.659: INFO: Created: latency-svc-9cjp5 +Oct 27 14:49:36.678: INFO: Got endpoints: latency-svc-z6j4j [747.319061ms] +Oct 27 14:49:36.687: INFO: Created: latency-svc-h4xxv +Oct 27 14:49:36.728: INFO: Got endpoints: latency-svc-rkz9w [748.931501ms] +Oct 27 14:49:36.737: INFO: Created: latency-svc-dvrx6 +Oct 27 14:49:36.780: INFO: Got endpoints: latency-svc-xddv2 [750.007003ms] +Oct 27 14:49:36.789: INFO: Created: latency-svc-bnxpr +Oct 27 14:49:36.830: INFO: Got endpoints: latency-svc-dkhc7 [749.162515ms] +Oct 27 14:49:36.839: INFO: Created: latency-svc-55bzn +Oct 27 14:49:36.879: INFO: Got endpoints: latency-svc-bm8mn [749.852549ms] +Oct 27 14:49:36.888: INFO: Created: latency-svc-pnjmf +Oct 27 14:49:36.929: INFO: Got endpoints: latency-svc-lfptv [748.95091ms] +Oct 27 14:49:36.938: INFO: Created: latency-svc-rrd9s +Oct 27 14:49:36.979: INFO: Got endpoints: latency-svc-zzbqx [748.687431ms] +Oct 27 14:49:36.988: INFO: Created: latency-svc-ngg5x +Oct 27 14:49:37.029: INFO: Got endpoints: latency-svc-f2z5f [749.625153ms] +Oct 27 14:49:37.038: INFO: Created: latency-svc-ng2df +Oct 27 14:49:37.080: INFO: Got endpoints: latency-svc-gpvl7 [750.683077ms] +Oct 27 14:49:37.088: INFO: Created: latency-svc-tpcpl +Oct 27 14:49:37.129: INFO: Got endpoints: latency-svc-r6vwb [751.069313ms] +Oct 27 14:49:37.139: INFO: Created: latency-svc-sj5l6 +Oct 27 14:49:37.180: INFO: Got endpoints: latency-svc-f59cn [750.808143ms] +Oct 27 14:49:37.189: INFO: Created: latency-svc-v6987 +Oct 27 14:49:37.228: INFO: Got endpoints: latency-svc-kbf2s [747.948607ms] +Oct 27 14:49:37.238: INFO: Created: latency-svc-5jqqw +Oct 27 14:49:37.279: INFO: Got endpoints: latency-svc-xmnrb [750.778012ms] +Oct 27 14:49:37.287: INFO: Created: latency-svc-fgfs7 +Oct 27 14:49:37.328: INFO: Got endpoints: latency-svc-bzpbr [750.265825ms] +Oct 27 14:49:37.337: INFO: Created: latency-svc-fj5d2 +Oct 27 14:49:37.379: INFO: Got endpoints: latency-svc-9cjp5 [750.778093ms] +Oct 27 14:49:37.388: INFO: Created: latency-svc-4fhb8 +Oct 27 14:49:37.431: INFO: Got endpoints: latency-svc-h4xxv [752.877892ms] +Oct 27 14:49:37.446: INFO: Created: latency-svc-9qzgr +Oct 27 14:49:37.478: INFO: Got endpoints: latency-svc-dvrx6 [749.998985ms] +Oct 27 14:49:37.487: INFO: Created: latency-svc-dxnbq +Oct 27 14:49:37.529: INFO: Got endpoints: latency-svc-bnxpr [749.16458ms] +Oct 27 14:49:37.538: INFO: Created: latency-svc-r6j69 +Oct 27 14:49:37.579: INFO: Got endpoints: latency-svc-55bzn [748.549636ms] +Oct 27 14:49:37.588: INFO: Created: latency-svc-gtf6z +Oct 27 14:49:37.629: INFO: Got endpoints: latency-svc-pnjmf [750.432632ms] +Oct 27 14:49:37.640: INFO: Created: latency-svc-mxprq +Oct 27 14:49:37.679: INFO: Got endpoints: latency-svc-rrd9s [750.188885ms] +Oct 27 14:49:37.688: INFO: Created: latency-svc-fhmvm +Oct 27 14:49:37.729: INFO: Got endpoints: latency-svc-ngg5x [749.564869ms] +Oct 27 14:49:37.741: INFO: Created: latency-svc-vtmh6 +Oct 27 14:49:37.780: INFO: Got endpoints: latency-svc-ng2df [751.384168ms] +Oct 27 14:49:37.789: INFO: Created: latency-svc-vg82r +Oct 27 14:49:37.829: INFO: Got endpoints: latency-svc-tpcpl [749.718332ms] +Oct 27 14:49:37.838: INFO: Created: latency-svc-qmvs2 +Oct 27 14:49:37.880: INFO: Got endpoints: latency-svc-sj5l6 
[750.569514ms] +Oct 27 14:49:37.900: INFO: Created: latency-svc-mzfbs +Oct 27 14:49:37.929: INFO: Got endpoints: latency-svc-v6987 [748.790648ms] +Oct 27 14:49:37.946: INFO: Created: latency-svc-5lhg8 +Oct 27 14:49:37.980: INFO: Got endpoints: latency-svc-5jqqw [751.493921ms] +Oct 27 14:49:37.989: INFO: Created: latency-svc-4hr6p +Oct 27 14:49:38.030: INFO: Got endpoints: latency-svc-fgfs7 [750.694764ms] +Oct 27 14:49:38.043: INFO: Created: latency-svc-gw6qh +Oct 27 14:49:38.079: INFO: Got endpoints: latency-svc-fj5d2 [750.651364ms] +Oct 27 14:49:38.088: INFO: Created: latency-svc-6f98p +Oct 27 14:49:38.130: INFO: Got endpoints: latency-svc-4fhb8 [750.41198ms] +Oct 27 14:49:38.139: INFO: Created: latency-svc-lfkkh +Oct 27 14:49:38.179: INFO: Got endpoints: latency-svc-9qzgr [748.30885ms] +Oct 27 14:49:38.189: INFO: Created: latency-svc-5whkt +Oct 27 14:49:38.231: INFO: Got endpoints: latency-svc-dxnbq [752.497639ms] +Oct 27 14:49:38.240: INFO: Created: latency-svc-v7j9s +Oct 27 14:49:38.279: INFO: Got endpoints: latency-svc-r6j69 [749.627807ms] +Oct 27 14:49:38.296: INFO: Created: latency-svc-4gnx5 +Oct 27 14:49:38.329: INFO: Got endpoints: latency-svc-gtf6z [749.426888ms] +Oct 27 14:49:38.347: INFO: Created: latency-svc-hnmzk +Oct 27 14:49:38.378: INFO: Got endpoints: latency-svc-mxprq [748.883123ms] +Oct 27 14:49:38.387: INFO: Created: latency-svc-ss8rh +Oct 27 14:49:38.429: INFO: Got endpoints: latency-svc-fhmvm [750.220776ms] +Oct 27 14:49:38.441: INFO: Created: latency-svc-2tqsb +Oct 27 14:49:38.481: INFO: Got endpoints: latency-svc-vtmh6 [752.02085ms] +Oct 27 14:49:38.491: INFO: Created: latency-svc-q9s9d +Oct 27 14:49:38.530: INFO: Got endpoints: latency-svc-vg82r [749.889724ms] +Oct 27 14:49:38.539: INFO: Created: latency-svc-cwwn7 +Oct 27 14:49:38.579: INFO: Got endpoints: latency-svc-qmvs2 [749.89157ms] +Oct 27 14:49:38.589: INFO: Created: latency-svc-rtq9s +Oct 27 14:49:38.629: INFO: Got endpoints: latency-svc-mzfbs [749.202797ms] +Oct 27 14:49:38.638: INFO: Created: latency-svc-vhzxc +Oct 27 14:49:38.679: INFO: Got endpoints: latency-svc-5lhg8 [749.713217ms] +Oct 27 14:49:38.689: INFO: Created: latency-svc-sknd4 +Oct 27 14:49:38.730: INFO: Got endpoints: latency-svc-4hr6p [749.923258ms] +Oct 27 14:49:38.740: INFO: Created: latency-svc-q9fzq +Oct 27 14:49:38.779: INFO: Got endpoints: latency-svc-gw6qh [749.515923ms] +Oct 27 14:49:38.791: INFO: Created: latency-svc-6xnkq +Oct 27 14:49:38.829: INFO: Got endpoints: latency-svc-6f98p [749.468851ms] +Oct 27 14:49:38.838: INFO: Created: latency-svc-92m7c +Oct 27 14:49:38.879: INFO: Got endpoints: latency-svc-lfkkh [749.581968ms] +Oct 27 14:49:38.889: INFO: Created: latency-svc-6v6zg +Oct 27 14:49:38.931: INFO: Got endpoints: latency-svc-5whkt [751.763631ms] +Oct 27 14:49:38.942: INFO: Created: latency-svc-k2w8h +Oct 27 14:49:38.979: INFO: Got endpoints: latency-svc-v7j9s [748.245673ms] +Oct 27 14:49:38.988: INFO: Created: latency-svc-t9jws +Oct 27 14:49:39.031: INFO: Got endpoints: latency-svc-4gnx5 [751.818166ms] +Oct 27 14:49:39.039: INFO: Created: latency-svc-fxrj8 +Oct 27 14:49:39.079: INFO: Got endpoints: latency-svc-hnmzk [750.249483ms] +Oct 27 14:49:39.087: INFO: Created: latency-svc-2qgm6 +Oct 27 14:49:39.129: INFO: Got endpoints: latency-svc-ss8rh [750.484752ms] +Oct 27 14:49:39.138: INFO: Created: latency-svc-qrdhr +Oct 27 14:49:39.181: INFO: Got endpoints: latency-svc-2tqsb [751.881037ms] +Oct 27 14:49:39.190: INFO: Created: latency-svc-q96n8 +Oct 27 14:49:39.229: INFO: Got endpoints: latency-svc-q9s9d [747.735654ms] +Oct 
27 14:49:39.238: INFO: Created: latency-svc-zk9tt +Oct 27 14:49:39.279: INFO: Got endpoints: latency-svc-cwwn7 [749.146457ms] +Oct 27 14:49:39.288: INFO: Created: latency-svc-5k5d6 +Oct 27 14:49:39.329: INFO: Got endpoints: latency-svc-rtq9s [749.554135ms] +Oct 27 14:49:39.338: INFO: Created: latency-svc-wf5vb +Oct 27 14:49:39.379: INFO: Got endpoints: latency-svc-vhzxc [750.057524ms] +Oct 27 14:49:39.389: INFO: Created: latency-svc-66l4p +Oct 27 14:49:39.429: INFO: Got endpoints: latency-svc-sknd4 [749.64945ms] +Oct 27 14:49:39.438: INFO: Created: latency-svc-f9pq2 +Oct 27 14:49:39.480: INFO: Got endpoints: latency-svc-q9fzq [749.660689ms] +Oct 27 14:49:39.492: INFO: Created: latency-svc-2jv2n +Oct 27 14:49:39.529: INFO: Got endpoints: latency-svc-6xnkq [749.667182ms] +Oct 27 14:49:39.538: INFO: Created: latency-svc-6x4m8 +Oct 27 14:49:39.579: INFO: Got endpoints: latency-svc-92m7c [750.013583ms] +Oct 27 14:49:39.588: INFO: Created: latency-svc-25zrq +Oct 27 14:49:39.630: INFO: Got endpoints: latency-svc-6v6zg [750.173557ms] +Oct 27 14:49:39.639: INFO: Created: latency-svc-29mjk +Oct 27 14:49:39.679: INFO: Got endpoints: latency-svc-k2w8h [747.47759ms] +Oct 27 14:49:39.688: INFO: Created: latency-svc-wbn68 +Oct 27 14:49:39.729: INFO: Got endpoints: latency-svc-t9jws [749.548588ms] +Oct 27 14:49:39.738: INFO: Created: latency-svc-5sp5m +Oct 27 14:49:39.780: INFO: Got endpoints: latency-svc-fxrj8 [749.180396ms] +Oct 27 14:49:39.789: INFO: Created: latency-svc-l99ch +Oct 27 14:49:39.856: INFO: Got endpoints: latency-svc-2qgm6 [777.437513ms] +Oct 27 14:49:39.865: INFO: Created: latency-svc-xf94c +Oct 27 14:49:39.879: INFO: Got endpoints: latency-svc-qrdhr [749.77579ms] +Oct 27 14:49:39.894: INFO: Created: latency-svc-tzmhz +Oct 27 14:49:39.929: INFO: Got endpoints: latency-svc-q96n8 [747.413693ms] +Oct 27 14:49:39.938: INFO: Created: latency-svc-7bbvn +Oct 27 14:49:39.979: INFO: Got endpoints: latency-svc-zk9tt [749.634199ms] +Oct 27 14:49:39.988: INFO: Created: latency-svc-vskvp +Oct 27 14:49:40.030: INFO: Got endpoints: latency-svc-5k5d6 [750.341677ms] +Oct 27 14:49:40.039: INFO: Created: latency-svc-s6plb +Oct 27 14:49:40.079: INFO: Got endpoints: latency-svc-wf5vb [749.458879ms] +Oct 27 14:49:40.088: INFO: Created: latency-svc-bqvzf +Oct 27 14:49:40.128: INFO: Got endpoints: latency-svc-66l4p [749.394708ms] +Oct 27 14:49:40.138: INFO: Created: latency-svc-pwz47 +Oct 27 14:49:40.179: INFO: Got endpoints: latency-svc-f9pq2 [750.153684ms] +Oct 27 14:49:40.188: INFO: Created: latency-svc-6tddh +Oct 27 14:49:40.230: INFO: Got endpoints: latency-svc-2jv2n [749.764985ms] +Oct 27 14:49:40.249: INFO: Created: latency-svc-vjfmf +Oct 27 14:49:40.364: INFO: Got endpoints: latency-svc-6x4m8 [834.702605ms] +Oct 27 14:49:40.364: INFO: Got endpoints: latency-svc-25zrq [785.576833ms] +Oct 27 14:49:40.375: INFO: Created: latency-svc-t7ctv +Oct 27 14:49:40.460: INFO: Created: latency-svc-nttqz +Oct 27 14:49:40.461: INFO: Got endpoints: latency-svc-29mjk [831.361428ms] +Oct 27 14:49:40.463: INFO: Got endpoints: latency-svc-wbn68 [784.024931ms] +Oct 27 14:49:40.470: INFO: Created: latency-svc-hbg9c +Oct 27 14:49:40.559: INFO: Got endpoints: latency-svc-5sp5m [830.582963ms] +Oct 27 14:49:40.563: INFO: Got endpoints: latency-svc-l99ch [783.644443ms] +Oct 27 14:49:40.564: INFO: Created: latency-svc-2srf9 +Oct 27 14:49:40.568: INFO: Created: latency-svc-c7nbb +Oct 27 14:49:40.572: INFO: Created: latency-svc-bwglq +Oct 27 14:49:40.579: INFO: Got endpoints: latency-svc-xf94c [722.28126ms] +Oct 27 14:49:40.629: 
INFO: Got endpoints: latency-svc-tzmhz [750.296378ms] +Oct 27 14:49:40.679: INFO: Got endpoints: latency-svc-7bbvn [750.421459ms] +Oct 27 14:49:40.739: INFO: Got endpoints: latency-svc-vskvp [760.306973ms] +Oct 27 14:49:40.779: INFO: Got endpoints: latency-svc-s6plb [748.755222ms] +Oct 27 14:49:40.829: INFO: Got endpoints: latency-svc-bqvzf [750.338058ms] +Oct 27 14:49:40.879: INFO: Got endpoints: latency-svc-pwz47 [750.442291ms] +Oct 27 14:49:40.930: INFO: Got endpoints: latency-svc-6tddh [750.461169ms] +Oct 27 14:49:40.979: INFO: Got endpoints: latency-svc-vjfmf [749.360841ms] +Oct 27 14:49:41.029: INFO: Got endpoints: latency-svc-t7ctv [664.813817ms] +Oct 27 14:49:41.079: INFO: Got endpoints: latency-svc-nttqz [714.732035ms] +Oct 27 14:49:41.129: INFO: Got endpoints: latency-svc-hbg9c [668.343445ms] +Oct 27 14:49:41.180: INFO: Got endpoints: latency-svc-2srf9 [716.868285ms] +Oct 27 14:49:41.229: INFO: Got endpoints: latency-svc-c7nbb [669.138994ms] +Oct 27 14:49:41.279: INFO: Got endpoints: latency-svc-bwglq [715.782325ms] +Oct 27 14:49:41.279: INFO: Latencies: [11.502941ms 14.998797ms 17.12535ms 24.867183ms 29.079666ms 32.573636ms 36.797933ms 43.027337ms 50.499333ms 52.085275ms 57.77868ms 59.535701ms 65.137892ms 71.176577ms 123.158794ms 127.83085ms 128.272295ms 128.642399ms 131.157259ms 131.178084ms 131.387728ms 131.622936ms 132.537032ms 132.936841ms 137.851977ms 179.139371ms 180.299244ms 181.15253ms 184.185472ms 186.253961ms 186.887449ms 188.163537ms 189.666405ms 190.465777ms 190.881534ms 192.568515ms 268.376706ms 302.091074ms 304.728729ms 316.389499ms 360.333388ms 396.812143ms 437.138327ms 492.876729ms 534.778527ms 564.591241ms 576.641613ms 608.051404ms 655.664798ms 664.813817ms 664.824298ms 668.19243ms 668.343445ms 669.138994ms 675.748742ms 712.045828ms 712.580138ms 714.732035ms 715.782325ms 715.981025ms 716.868285ms 718.534476ms 722.28126ms 726.104139ms 744.967865ms 746.503972ms 747.319061ms 747.413693ms 747.47759ms 747.735654ms 747.948607ms 748.019984ms 748.023667ms 748.187991ms 748.245673ms 748.257048ms 748.30885ms 748.329024ms 748.34755ms 748.373608ms 748.377725ms 748.549636ms 748.687431ms 748.701811ms 748.744153ms 748.755222ms 748.790648ms 748.883123ms 748.931501ms 748.95091ms 749.060994ms 749.143268ms 749.146457ms 749.162515ms 749.16458ms 749.180396ms 749.185578ms 749.202797ms 749.360841ms 749.394708ms 749.413632ms 749.413885ms 749.426888ms 749.458879ms 749.468851ms 749.499028ms 749.499892ms 749.515923ms 749.548588ms 749.554135ms 749.564869ms 749.581968ms 749.625153ms 749.627807ms 749.634199ms 749.64945ms 749.653164ms 749.660689ms 749.667182ms 749.713217ms 749.718332ms 749.75798ms 749.764985ms 749.770793ms 749.77579ms 749.821155ms 749.852549ms 749.881484ms 749.889724ms 749.89157ms 749.923258ms 749.94198ms 749.998985ms 750.007003ms 750.013583ms 750.057524ms 750.153684ms 750.173557ms 750.188885ms 750.220776ms 750.249483ms 750.265825ms 750.296378ms 750.338058ms 750.341677ms 750.41198ms 750.421459ms 750.432632ms 750.442291ms 750.461169ms 750.484752ms 750.569514ms 750.629229ms 750.651364ms 750.683077ms 750.692593ms 750.694764ms 750.726237ms 750.778012ms 750.778093ms 750.808143ms 750.854873ms 750.881352ms 750.899342ms 750.958497ms 750.972811ms 750.97779ms 751.069313ms 751.074848ms 751.384168ms 751.389536ms 751.493921ms 751.600134ms 751.721326ms 751.763631ms 751.772547ms 751.779245ms 751.818166ms 751.881037ms 751.925091ms 752.02085ms 752.164811ms 752.293072ms 752.497639ms 752.877892ms 753.10519ms 754.150142ms 760.306973ms 777.437513ms 782.064839ms 783.644443ms 783.816931ms 
784.024931ms 785.576833ms 787.331012ms 830.582963ms 831.361428ms 832.791368ms 833.603019ms 834.702605ms] +Oct 27 14:49:41.280: INFO: 50 %ile: 749.413632ms +Oct 27 14:49:41.280: INFO: 90 %ile: 752.02085ms +Oct 27 14:49:41.280: INFO: 99 %ile: 833.603019ms +Oct 27 14:49:41.280: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:41.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-5061" for this suite. +•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":199,"skipped":3418,"failed":0} +SSSS +------------------------------ +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:41.295: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-713 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:43.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-713" for this suite. 
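The check above exercises the container-image contract: when `command` and `args` are both left empty, the kubelet runs the image's built-in ENTRYPOINT and CMD. A minimal sketch for replaying this outside the e2e framework (kubectl pointed at any test cluster; pod and image names are illustrative, not taken from the recorded run):

```bash
# No command/args given, so the defaults baked into the image are used
# (for busybox that is "sh", which exits immediately -> phase Succeeded).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.34
EOF
kubectl get pod image-defaults-demo   # phase should reach Succeeded
```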
+•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":200,"skipped":3422,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:43.537: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9278 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:49:45.722: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:45.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9278" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":201,"skipped":3446,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:45.749: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2593 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-f7aacaab-8f7c-4d9b-a9e6-5895ddf0b8db +STEP: Creating a pod to test consume secrets +Oct 27 14:49:45.917: INFO: Waiting up to 5m0s for pod "pod-secrets-c28d6b40-531d-46f8-bb42-1f9b77e8fbbc" in namespace "secrets-2593" to be "Succeeded or Failed" +Oct 27 14:49:45.921: INFO: Pod "pod-secrets-c28d6b40-531d-46f8-bb42-1f9b77e8fbbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305879ms +Oct 27 14:49:47.928: INFO: Pod "pod-secrets-c28d6b40-531d-46f8-bb42-1f9b77e8fbbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011036846s +STEP: Saw pod success +Oct 27 14:49:47.928: INFO: Pod "pod-secrets-c28d6b40-531d-46f8-bb42-1f9b77e8fbbc" satisfied condition "Succeeded or Failed" +Oct 27 14:49:47.961: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-c28d6b40-531d-46f8-bb42-1f9b77e8fbbc container secret-volume-test: +STEP: delete the pod +Oct 27 14:49:47.982: INFO: Waiting for pod pod-secrets-c28d6b40-531d-46f8-bb42-1f9b77e8fbbc to disappear +Oct 27 14:49:47.987: INFO: Pod pod-secrets-c28d6b40-531d-46f8-bb42-1f9b77e8fbbc no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:47.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2593" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":202,"skipped":3472,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:48.000: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1747 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:49:48.204: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Oct 27 14:49:48.214: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:48.214: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. +Oct 27 14:49:48.274: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:48.274: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:49:49.279: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:49.279: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:49:50.279: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:50.279: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Oct 27 14:49:50.359: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:50.359: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Oct 27 14:49:50.379: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:50.379: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:49:51.384: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:51.385: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:49:52.385: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:52.385: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:49:53.385: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:53.385: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:49:54.384: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:54.384: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1747, will wait for 
the garbage collector to delete the pods +Oct 27 14:49:54.454: INFO: Deleting DaemonSet.extensions daemon-set took: 6.088009ms +Oct 27 14:49:54.555: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.568362ms +Oct 27 14:49:56.460: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:56.460: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:49:56.465: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"26522"},"items":null} + +Oct 27 14:49:56.470: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"26522"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:56.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1747" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":203,"skipped":3486,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:56.509: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-1679 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:49:56.915: INFO: Pod name wrapped-volume-race-cbb13b2c-a637-44cc-b039-48dd3caf9ed1: Found 0 pods out of 5 +Oct 27 14:50:01.931: INFO: Pod name wrapped-volume-race-cbb13b2c-a637-44cc-b039-48dd3caf9ed1: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-cbb13b2c-a637-44cc-b039-48dd3caf9ed1 in namespace emptydir-wrapper-1679, will wait for the garbage collector to delete the pods +Oct 27 14:50:02.020: INFO: Deleting ReplicationController wrapped-volume-race-cbb13b2c-a637-44cc-b039-48dd3caf9ed1 took: 7.041196ms +Oct 27 14:50:02.121: INFO: Terminating ReplicationController wrapped-volume-race-cbb13b2c-a637-44cc-b039-48dd3caf9ed1 pods took: 100.419041ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:50:04.468: INFO: Pod name wrapped-volume-race-1df53654-4a03-4a11-93b3-6029443ef7d0: Found 0 pods out of 5 +Oct 27 14:50:09.482: INFO: Pod name wrapped-volume-race-1df53654-4a03-4a11-93b3-6029443ef7d0: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-1df53654-4a03-4a11-93b3-6029443ef7d0 in namespace emptydir-wrapper-1679, will wait for the garbage collector to delete the pods +Oct 27 14:50:09.570: INFO: Deleting ReplicationController wrapped-volume-race-1df53654-4a03-4a11-93b3-6029443ef7d0 
took: 7.972949ms +Oct 27 14:50:09.672: INFO: Terminating ReplicationController wrapped-volume-race-1df53654-4a03-4a11-93b3-6029443ef7d0 pods took: 101.237903ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:50:10.993: INFO: Pod name wrapped-volume-race-1d91c2e9-8ab3-452f-b213-bffa94fe55ba: Found 0 pods out of 5 +Oct 27 14:50:16.011: INFO: Pod name wrapped-volume-race-1d91c2e9-8ab3-452f-b213-bffa94fe55ba: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-1d91c2e9-8ab3-452f-b213-bffa94fe55ba in namespace emptydir-wrapper-1679, will wait for the garbage collector to delete the pods +Oct 27 14:50:16.102: INFO: Deleting ReplicationController wrapped-volume-race-1d91c2e9-8ab3-452f-b213-bffa94fe55ba took: 8.199115ms +Oct 27 14:50:16.203: INFO: Terminating ReplicationController wrapped-volume-race-1d91c2e9-8ab3-452f-b213-bffa94fe55ba pods took: 100.818697ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:18.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-1679" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":204,"skipped":3488,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:18.183: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-1660 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:50:18.335: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Oct 27 14:50:18.345: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 14:50:23.353: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 14:50:23.353: INFO: Creating deployment "test-rolling-update-deployment" +Oct 27 14:50:23.359: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Oct 27 14:50:23.370: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set +Oct 27 14:50:25.380: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Oct 27 14:50:25.385: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:50:25.398: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1660 a3120f62-29c2-4b09-aeaa-d219b8c064e0 26975 1 2021-10-27 14:50:23 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-27 14:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:50:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000cfa7d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:50:23 +0000 UTC,LastTransitionTime:2021-10-27 14:50:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-27 14:50:24 +0000 UTC,LastTransitionTime:2021-10-27 14:50:23 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:50:25.403: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-1660 7dcc8dc8-9acf-40f6-bc5b-dfbcf870b36b 26968 1 2021-10-27 14:50:23 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a3120f62-29c2-4b09-aeaa-d219b8c064e0 0xc0066cd837 0xc0066cd838}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3120f62-29c2-4b09-aeaa-d219b8c064e0\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:50:24 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0066cd8e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:50:25.404: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Oct 27 14:50:25.404: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1660 f23d8171-d805-4807-a337-d45122ed168a 26974 2 2021-10-27 14:50:18 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a3120f62-29c2-4b09-aeaa-d219b8c064e0 0xc0066cd6e7 0xc0066cd6e8}] [] [{e2e.test Update apps/v1 2021-10-27 14:50:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:50:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3120f62-29c2-4b09-aeaa-d219b8c064e0\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:50:24 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0066cd7a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:50:25.409: INFO: Pod "test-rolling-update-deployment-585b757574-n4d7t" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-n4d7t test-rolling-update-deployment-585b757574- deployment-1660 bb7a7048-5800-4650-a3ad-1305eb337693 26967 0 2021-10-27 14:50:23 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[cni.projectcalico.org/containerID:96be652b888d83ad84815fdb8a7c892c80cae357d7d03456bd61a30efc7a7743 cni.projectcalico.org/podIP:172.16.1.234/32 cni.projectcalico.org/podIPs:172.16.1.234/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 7dcc8dc8-9acf-40f6-bc5b-dfbcf870b36b 0xc0066cdd47 0xc0066cdd48}] [] [{calico Update v1 2021-10-27 14:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 14:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7dcc8dc8-9acf-40f6-bc5b-dfbcf870b36b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:50:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.234\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-47fn9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47fn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/
not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:50:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:50:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:50:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:50:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.234,StartTime:2021-10-27 14:50:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:50:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://d409a87fff5cb3f374ccf0aa043d32f9a28fbc3623f92eeada2926a4c563c6b2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:25.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1660" for this suite. 
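The Deployment dump above shows the strategy fields the test relies on: `RollingUpdate` with 25% `maxSurge`/`maxUnavailable`, so old pods are deleted only as new ones become available. A minimal manifest with the same shape, for experimenting outside the suite (names illustrative):

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%         # extra pods allowed above the desired count
      maxUnavailable: 25%   # pods that may be unavailable during the update
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
EOF
# Any pod-template change triggers the rolling replacement of old pods:
kubectl rollout restart deployment/rolling-demo
kubectl rollout status deployment/rolling-demo
```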
+•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":205,"skipped":3507,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:25.422: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6222 +STEP: Waiting for a default service account to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 27 14:50:25.584: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:44.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6222" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":206,"skipped":3509,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:44.119: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-3163 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:50:44.285: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:50:46.291: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:50:48.292: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:50:50.295: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:50:52.292: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:50:54.290: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:50:56.292: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:50:58.291: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:51:00.291: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:51:02.292: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:51:04.290: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = false) +Oct 27 14:51:06.292: INFO: The status of Pod test-webserver-cec4ddb6-beb6-4414-a75f-da207d9424c9 is Running (Ready = true) +Oct 27 14:51:06.296: INFO: Container started at 2021-10-27 14:50:45 +0000 UTC, pod became ready at 2021-10-27 14:51:04 +0000 UTC +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:06.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3163" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":207,"skipped":3532,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:06.309: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7701 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:51:06.472: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c7d5f7d-bc79-47ca-9b52-7fe799bb19e8" in namespace "downward-api-7701" to be "Succeeded or Failed" +Oct 27 14:51:06.477: INFO: Pod "downwardapi-volume-3c7d5f7d-bc79-47ca-9b52-7fe799bb19e8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.079309ms +Oct 27 14:51:08.484: INFO: Pod "downwardapi-volume-3c7d5f7d-bc79-47ca-9b52-7fe799bb19e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011779064s +STEP: Saw pod success +Oct 27 14:51:08.484: INFO: Pod "downwardapi-volume-3c7d5f7d-bc79-47ca-9b52-7fe799bb19e8" satisfied condition "Succeeded or Failed" +Oct 27 14:51:08.489: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-3c7d5f7d-bc79-47ca-9b52-7fe799bb19e8 container client-container: +STEP: delete the pod +Oct 27 14:51:08.516: INFO: Waiting for pod downwardapi-volume-3c7d5f7d-bc79-47ca-9b52-7fe799bb19e8 to disappear +Oct 27 14:51:08.521: INFO: Pod downwardapi-volume-3c7d5f7d-bc79-47ca-9b52-7fe799bb19e8 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:08.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7701" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":208,"skipped":3540,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:08.534: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3921 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:51:09.187: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:51:12.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:24.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3921" for this suite. +STEP: Destroying namespace "webhook-3921-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":209,"skipped":3557,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:24.647: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4525 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Oct 27 14:51:24.813: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4525 0c6783b7-c984-478b-9c30-73a140f632bb 27405 0 2021-10-27 14:51:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:51:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:51:24.813: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4525 0c6783b7-c984-478b-9c30-73a140f632bb 27406 0 2021-10-27 14:51:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:51:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Oct 27 14:51:24.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4525 0c6783b7-c984-478b-9c30-73a140f632bb 27407 0 2021-10-27 14:51:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:51:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:51:24.832: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4525 0c6783b7-c984-478b-9c30-73a140f632bb 27408 0 2021-10-27 14:51:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] 
[] [{e2e.test Update v1 2021-10-27 14:51:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:24.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4525" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":210,"skipped":3577,"failed":0} +SS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:24.843: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5695 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:25.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5695" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":211,"skipped":3579,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:25.044: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-7386 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:42.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7386" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":212,"skipped":3612,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:42.256: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1558 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:51:42.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b099f12-a867-4947-bc8f-66bc39c543f5" in namespace "projected-1558" to be "Succeeded or Failed" +Oct 27 14:51:42.430: INFO: Pod "downwardapi-volume-6b099f12-a867-4947-bc8f-66bc39c543f5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.206703ms +Oct 27 14:51:44.436: INFO: Pod "downwardapi-volume-6b099f12-a867-4947-bc8f-66bc39c543f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01157805s +STEP: Saw pod success +Oct 27 14:51:44.436: INFO: Pod "downwardapi-volume-6b099f12-a867-4947-bc8f-66bc39c543f5" satisfied condition "Succeeded or Failed" +Oct 27 14:51:44.441: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-6b099f12-a867-4947-bc8f-66bc39c543f5 container client-container: +STEP: delete the pod +Oct 27 14:51:44.459: INFO: Waiting for pod downwardapi-volume-6b099f12-a867-4947-bc8f-66bc39c543f5 to disappear +Oct 27 14:51:44.463: INFO: Pod downwardapi-volume-6b099f12-a867-4947-bc8f-66bc39c543f5 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:44.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1558" for this suite. 
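The projected variant of the downward API test differs from the plain one only in the volume type: a `projected` volume can merge several sources (downward API, Secrets, ConfigMaps, service account tokens) under one mount point. A sketch of the downward-API source on its own (names illustrative):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.34
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:              # one mount point, potentially many sources
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
kubectl logs projected-cpu-demo
```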
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":213,"skipped":3621,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:44.477: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-3015 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:51:44.621: INFO: Creating ReplicaSet my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541 +Oct 27 14:51:44.632: INFO: Pod name my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541: Found 0 pods out of 1 +Oct 27 14:51:49.637: INFO: Pod name my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541: Found 1 pods out of 1 +Oct 27 14:51:49.638: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541" is running +Oct 27 14:51:49.642: INFO: Pod "my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541-dd7zx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:51:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:51:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:51:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:51:44 +0000 UTC Reason: Message:}]) +Oct 27 14:51:49.642: INFO: Trying to dial the pod +Oct 27 14:51:54.755: INFO: Controller my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541: Got expected result from replica 1 [my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541-dd7zx]: "my-hostname-basic-d48d3c9d-27ea-4b70-bc77-ca347a5f3541-dd7zx", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:54.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3015" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":214,"skipped":3628,"failed":0} + +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:54.768: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3200 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 rs, got 1 rs +STEP: expected 0 pods, got 1 pods +STEP: Gathering metrics +Oct 27 14:51:55.476: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:51:55.476097 5703 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:55.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3200" for this suite. 
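+
+The garbage-collector check above deletes a Deployment and expects its ReplicaSet and pods to be collected rather than orphaned. A hand-run sketch; the deployment name and nginx image are assumptions:
+
+```bash
+kubectl create deployment gc-demo --image=nginx:1.25 --replicas=1
+kubectl get rs -l app=gc-demo        # one ReplicaSet owned by the Deployment
+kubectl delete deployment gc-demo    # default background cascade, as in the test
+sleep 10
+kubectl get rs -l app=gc-demo        # expect: No resources found
+# By contrast, deleting with --cascade=orphan would leave the ReplicaSet behind.
+```
+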
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":215,"skipped":3628,"failed":0} +SSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:55.487: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-5974 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Oct 27 14:51:55.658: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:55.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-5974" for this suite. 
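+
+The Events API check above creates a labelled set of events and deletes them as a collection. A rough core/v1 equivalent; the names, the label, and the minimal field set shown are assumptions:
+
+```bash
+# Create three labelled events, list them, then delete the whole set by label.
+for i in 1 2 3; do
+kubectl create -f - <<EOF
+apiVersion: v1
+kind: Event
+metadata:
+  name: demo-event-$i
+  labels:
+    testevent-set: "true"
+involvedObject:
+  apiVersion: v1
+  kind: Pod
+  name: demo-pod-$i
+  namespace: default
+reason: DemoReason
+message: Demo event $i
+type: Normal
+EOF
+done
+kubectl get events -l testevent-set=true     # the three events just created
+kubectl delete events -l testevent-set=true  # remove the labelled set, as the test does
+```
+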
+•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":216,"skipped":3637,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:55.689: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5130 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod test-webserver-672f7613-b830-46a2-ad2b-9716fe714fc4 in namespace container-probe-5130 +Oct 27 14:51:57.862: INFO: Started pod test-webserver-672f7613-b830-46a2-ad2b-9716fe714fc4 in namespace container-probe-5130 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:51:57.867: INFO: Initial restart count of pod test-webserver-672f7613-b830-46a2-ad2b-9716fe714fc4 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:55:58.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5130" for this suite. 
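+
+The probe check above runs a webserver with an HTTP liveness probe for roughly four minutes and asserts the restart count stays at zero. A sketch of the same idea; the pod name and the use of nginx in place of the suite's own test webserver are assumptions:
+
+```bash
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-demo
+spec:
+  containers:
+  - name: test-webserver
+    image: nginx:1.25
+    ports:
+    - containerPort: 80
+    livenessProbe:
+      httpGet:
+        path: /
+        port: 80
+      initialDelaySeconds: 5
+      periodSeconds: 5
+      failureThreshold: 1
+EOF
+sleep 240   # give the probe time to fire many times
+# A healthy endpoint should never trigger a restart; expect 0:
+kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
+```
+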
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":217,"skipped":3654,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:55:58.699: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1094 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-1094 +STEP: creating service affinity-nodeport-transition in namespace services-1094 +STEP: creating replication controller affinity-nodeport-transition in namespace services-1094 +I1027 14:55:58.865663 5703 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1094, replica count: 3 +I1027 14:56:01.917846 5703 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:56:01.935: INFO: Creating new exec pod +Oct 27 14:56:04.964: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Oct 27 14:56:05.610: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Oct 27 14:56:05.610: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:56:05.610: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.10.59 80' +Oct 27 14:56:05.906: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.10.59 80\nConnection to 172.25.10.59 80 port [tcp/http] succeeded!\n" +Oct 27 14:56:05.906: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:56:05.907: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c echo hostName | nc -v -t -w 
2 10.250.8.34 31330' +Oct 27 14:56:06.184: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.34 31330\nConnection to 10.250.8.34 31330 port [tcp/*] succeeded!\n" +Oct 27 14:56:06.184: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:56:06.184: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 31330' +Oct 27 14:56:06.503: INFO: stderr: "+ nc -v -t -w 2 10.250.8.35 31330\n+ echo hostName\nConnection to 10.250.8.35 31330 port [tcp/*] succeeded!\n" +Oct 27 14:56:06.504: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:56:06.515: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.8.34:31330/ ; done' +Oct 27 14:56:06.880: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n" +Oct 27 14:56:06.880: INFO: stdout: "\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc" +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: 
INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:06.880: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:36.882: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.8.34:31330/ ; done' +Oct 27 14:56:37.213: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n" +Oct 27 14:56:37.213: INFO: stdout: "\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-6b87p" +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:37.213: INFO: Received response from host: 
affinity-nodeport-transition-92kdc +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.213: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.231: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.8.34:31330/ ; done' +Oct 27 14:56:37.573: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n" +Oct 27 14:56:37.573: INFO: stdout: "\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-kcmtr\naffinity-nodeport-transition-6b87p\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-kcmtr" +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 
14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-6b87p +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:56:37.573: INFO: Received response from host: affinity-nodeport-transition-kcmtr +Oct 27 14:57:07.576: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1094 exec execpod-affinityg8687 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.8.34:31330/ ; done' +Oct 27 14:57:07.979: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:31330/\n" +Oct 27 14:57:07.979: INFO: stdout: "\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc\naffinity-nodeport-transition-92kdc" +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from 
host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Received response from host: affinity-nodeport-transition-92kdc +Oct 27 14:57:07.979: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1094, will wait for the garbage collector to delete the pods +Oct 27 14:57:08.053: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.611425ms +Oct 27 14:57:08.154: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.732861ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:10.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1094" for this suite. 
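+
+The service test above flips sessionAffinity between ClientIP and None on a NodePort service and watches whether repeated requests stick to one backend, exactly the pattern visible in the curl output recorded above. A rough reproduction; all names, the agnhost tag, and the curl client image are assumptions:
+
+```bash
+kubectl create deployment affinity-demo --replicas=3 \
+  --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost serve-hostname
+kubectl rollout status deployment/affinity-demo --timeout=120s
+kubectl expose deployment affinity-demo --type=NodePort --port=80 --target-port=9376
+kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
+NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
+NODE_PORT=$(kubectl get svc affinity-demo -o jsonpath='{.spec.ports[0].nodePort}')
+# With ClientIP affinity every response should name the same pod:
+kubectl run affinity-client --rm -i --restart=Never --image=curlimages/curl:8.4.0 --command -- \
+  sh -c "for i in \$(seq 1 8); do curl -s --connect-timeout 2 http://$NODE_IP:$NODE_PORT/; echo; done"
+# Switch affinity off; responses should now spread across the three replicas:
+kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'
+kubectl run affinity-client2 --rm -i --restart=Never --image=curlimages/curl:8.4.0 --command -- \
+  sh -c "for i in \$(seq 1 8); do curl -s --connect-timeout 2 http://$NODE_IP:$NODE_PORT/; echo; done"
+```
+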
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":218,"skipped":3659,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:10.281: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8586 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-2681be86-1227-47b7-956f-a327d3243a57 +STEP: Creating a pod to test consume secrets +Oct 27 14:57:10.450: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2ffb1034-c75e-47c8-b42b-473c2d20c752" in namespace "projected-8586" to be "Succeeded or Failed" +Oct 27 14:57:10.455: INFO: Pod "pod-projected-secrets-2ffb1034-c75e-47c8-b42b-473c2d20c752": Phase="Pending", Reason="", readiness=false. Elapsed: 5.344595ms +Oct 27 14:57:12.461: INFO: Pod "pod-projected-secrets-2ffb1034-c75e-47c8-b42b-473c2d20c752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011198014s +STEP: Saw pod success +Oct 27 14:57:12.461: INFO: Pod "pod-projected-secrets-2ffb1034-c75e-47c8-b42b-473c2d20c752" satisfied condition "Succeeded or Failed" +Oct 27 14:57:12.466: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-secrets-2ffb1034-c75e-47c8-b42b-473c2d20c752 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:57:12.488: INFO: Waiting for pod pod-projected-secrets-2ffb1034-c75e-47c8-b42b-473c2d20c752 to disappear +Oct 27 14:57:12.492: INFO: Pod pod-projected-secrets-2ffb1034-c75e-47c8-b42b-473c2d20c752 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:12.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8586" for this suite. 
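+
+The projected-secret test above mounts a secret key under a remapped path inside the volume and reads it back. A minimal sketch; the secret name, key, remapped path, and image are assumptions:
+
+```bash
+kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-pod
+spec:
+  restartPolicy: Never
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox:1.36
+    command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/projected-secret
+  volumes:
+  - name: secret-volume
+    projected:
+      sources:
+      - secret:
+          name: projected-secret-demo
+          items:
+          - key: data-1
+            path: new-path-data-1   # the "mapping": key data-1 appears under this path
+EOF
+kubectl logs projected-secret-pod   # expect: value-1
+```
+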
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":219,"skipped":3699,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:12.507: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-8919 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-secret-42fz +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:57:12.677: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-42fz" in namespace "subpath-8919" to be "Succeeded or Failed" +Oct 27 14:57:12.681: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.561314ms +Oct 27 14:57:14.688: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 2.011123614s +Oct 27 14:57:16.694: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 4.017221317s +Oct 27 14:57:18.701: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 6.024009563s +Oct 27 14:57:20.707: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 8.03004797s +Oct 27 14:57:22.716: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 10.038875526s +Oct 27 14:57:24.721: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 12.044509906s +Oct 27 14:57:26.728: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 14.051302729s +Oct 27 14:57:28.735: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 16.057883269s +Oct 27 14:57:30.741: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 18.064567852s +Oct 27 14:57:32.747: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Running", Reason="", readiness=true. Elapsed: 20.070270641s +Oct 27 14:57:34.754: INFO: Pod "pod-subpath-test-secret-42fz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.076752957s +STEP: Saw pod success +Oct 27 14:57:34.754: INFO: Pod "pod-subpath-test-secret-42fz" satisfied condition "Succeeded or Failed" +Oct 27 14:57:34.758: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-subpath-test-secret-42fz container test-container-subpath-secret-42fz: +STEP: delete the pod +Oct 27 14:57:34.778: INFO: Waiting for pod pod-subpath-test-secret-42fz to disappear +Oct 27 14:57:34.782: INFO: Pod pod-subpath-test-secret-42fz no longer exists +STEP: Deleting pod pod-subpath-test-secret-42fz +Oct 27 14:57:34.782: INFO: Deleting pod "pod-subpath-test-secret-42fz" in namespace "subpath-8919" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:34.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-8919" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":220,"skipped":3717,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:34.801: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1744 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 14:57:34.965: INFO: The status of Pod labelsupdate248dbb97-e081-4172-9e22-4c57b690c75d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:57:36.971: INFO: The status of Pod labelsupdate248dbb97-e081-4172-9e22-4c57b690c75d is Running (Ready = true) +Oct 27 14:57:37.504: INFO: Successfully updated pod "labelsupdate248dbb97-e081-4172-9e22-4c57b690c75d" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:41.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1744" for this suite. 
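+
+The downward API test above mounts the pod's labels as a file and expects the file to be rewritten after the labels are modified. A sketch of the same loop; names and image are assumptions, and the kubelet refreshes the file on its sync period, so allow up to a minute:
+
+```bash
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: labelsupdate-demo
+  labels:
+    key1: value1
+spec:
+  containers:
+  - name: client-container
+    image: busybox:1.36
+    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: labels
+        fieldRef:
+          fieldPath: metadata.labels
+EOF
+kubectl wait pod labelsupdate-demo --for=condition=Ready --timeout=120s
+kubectl label pod labelsupdate-demo key2=value2
+sleep 60
+kubectl logs labelsupdate-demo --tail=3   # should now include key2="value2"
+```
+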
+•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":221,"skipped":3789,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:41.553: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8858 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 14:57:41.698: INFO: namespace kubectl-8858 +Oct 27 14:57:41.698: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8858 create -f -' +Oct 27 14:57:41.884: INFO: stderr: "" +Oct 27 14:57:41.884: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 14:57:42.891: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:57:42.891: INFO: Found 1 / 1 +Oct 27 14:57:42.891: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 27 14:57:42.895: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:57:42.895: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 14:57:42.895: INFO: wait on agnhost-primary startup in kubectl-8858 +Oct 27 14:57:42.895: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8858 logs agnhost-primary-9jk2r agnhost-primary' +Oct 27 14:57:42.978: INFO: stderr: "" +Oct 27 14:57:42.978: INFO: stdout: "Paused\n" +STEP: exposing RC +Oct 27 14:57:42.978: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8858 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Oct 27 14:57:43.083: INFO: stderr: "" +Oct 27 14:57:43.083: INFO: stdout: "service/rm2 exposed\n" +Oct 27 14:57:43.087: INFO: Service rm2 in namespace kubectl-8858 found. 
+STEP: exposing service +Oct 27 14:57:45.097: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8858 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Oct 27 14:57:45.177: INFO: stderr: "" +Oct 27 14:57:45.177: INFO: stdout: "service/rm3 exposed\n" +Oct 27 14:57:45.181: INFO: Service rm3 in namespace kubectl-8858 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8858" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":222,"skipped":3829,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:47.204: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2078 +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Oct 27 14:57:49.376: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2078 PodName:pod-sharedvolume-0e20ddcd-03eb-4479-a9a7-e0844dc4f7c7 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:57:49.376: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:57:49.572: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:49.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2078" for this suite. 
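+
+The emptyDir test above has one container write a file into a shared volume and a second container read it back, which is what the ExecWithOptions call in the log performs. A sketch; pod, container, and path names are assumptions:
+
+```bash
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: shared-volume-demo
+spec:
+  containers:
+  - name: writer
+    image: busybox:1.36
+    command: ["sh", "-c", "echo 'Hello from the writer' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
+    volumeMounts:
+    - name: shared-data
+      mountPath: /usr/share/volumeshare
+  - name: reader
+    image: busybox:1.36
+    command: ["sh", "-c", "sleep 3600"]
+    volumeMounts:
+    - name: shared-data
+      mountPath: /usr/share/volumeshare
+  volumes:
+  - name: shared-data
+    emptyDir: {}
+EOF
+kubectl wait pod shared-volume-demo --for=condition=Ready --timeout=120s
+kubectl exec shared-volume-demo -c reader -- cat /usr/share/volumeshare/shareddata.txt
+```
+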
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":223,"skipped":3838,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:49.586: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-1388 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:57:49.739: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:52.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-1388" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":224,"skipped":3846,"failed":0} +SSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:52.947: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-1691 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-1691 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-1691 +Oct 27 14:57:53.108: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 14:58:03.117: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting 
scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:58:03.151: INFO: Deleting all statefulset in ns statefulset-1691 +Oct 27 14:58:03.156: INFO: Scaling statefulset ss to 0 +Oct 27 14:58:13.177: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:58:13.181: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:13.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-1691" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":225,"skipped":3852,"failed":0} + +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:13.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-144 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Oct 27 14:58:13.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Oct 27 14:58:26.743: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:58:30.360: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:44.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-144" for this suite. 
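+
+The CRD test above checks that multiple served versions of one API group are published into the cluster's OpenAPI document. A sketch with a two-version CRD; the group, kind, and schema are assumptions, and publishing can take a few seconds before kubectl explain sees it:
+
+```bash
+cat <<'EOF' | kubectl apply -f -
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: multiversions.stable.example.com
+spec:
+  group: stable.example.com
+  scope: Namespaced
+  names:
+    plural: multiversions
+    singular: multiversion
+    kind: MultiVersion
+  versions:
+  - name: v2
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        properties:
+          spec:
+            type: object
+            properties:
+              num:
+                type: integer
+  - name: v3
+    served: true
+    storage: false
+    schema:
+      openAPIV3Schema:
+        type: object
+        properties:
+          spec:
+            type: object
+            properties:
+              num:
+                type: integer
+EOF
+# Both served versions should show up in OpenAPI with their schemas:
+kubectl explain multiversions --api-version=stable.example.com/v2
+kubectl explain multiversions --api-version=stable.example.com/v3
+kubectl delete crd multiversions.stable.example.com
+```
+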
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":226,"skipped":3852,"failed":0} +SSS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:44.477: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8912 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:44.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8912" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":227,"skipped":3855,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:44.641: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2391 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:58:44.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53d800d6-7892-41fa-9ff0-cf1769db5a79" in namespace "downward-api-2391" to be "Succeeded or Failed" +Oct 27 14:58:44.803: INFO: Pod "downwardapi-volume-53d800d6-7892-41fa-9ff0-cf1769db5a79": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.569242ms +Oct 27 14:58:46.810: INFO: Pod "downwardapi-volume-53d800d6-7892-41fa-9ff0-cf1769db5a79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010947957s +STEP: Saw pod success +Oct 27 14:58:46.810: INFO: Pod "downwardapi-volume-53d800d6-7892-41fa-9ff0-cf1769db5a79" satisfied condition "Succeeded or Failed" +Oct 27 14:58:46.815: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-53d800d6-7892-41fa-9ff0-cf1769db5a79 container client-container: +STEP: delete the pod +Oct 27 14:58:46.877: INFO: Waiting for pod downwardapi-volume-53d800d6-7892-41fa-9ff0-cf1769db5a79 to disappear +Oct 27 14:58:46.881: INFO: Pod downwardapi-volume-53d800d6-7892-41fa-9ff0-cf1769db5a79 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:46.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2391" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":228,"skipped":3861,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:46.895: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3394 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating cluster-info +Oct 27 14:58:47.047: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3394 cluster-info' +Oct 27 14:58:47.118: INFO: stderr: "" +Oct 27 14:58:47.118: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:47.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3394" for this suite. 
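+
+The kubectl check above only asserts that cluster-info names the control plane endpoint. The same check by hand, with the optional dump the log's output hints at:
+
+```bash
+kubectl cluster-info
+# Expected to include a line of the form:
+#   Kubernetes control plane is running at https://<api-server-endpoint>
+# For deeper debugging, as the command's own output suggests:
+kubectl cluster-info dump --output-directory=/tmp/cluster-state
+```
+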
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":229,"skipped":3883,"failed":0} +S +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:47.129: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9423 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating secret secrets-9423/secret-test-487c1c5e-e921-4317-b0c1-e50edb74f0e2 +STEP: Creating a pod to test consume secrets +Oct 27 14:58:47.293: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ebb0e3e-9d49-4149-9b92-74422f1e813c" in namespace "secrets-9423" to be "Succeeded or Failed" +Oct 27 14:58:47.297: INFO: Pod "pod-configmaps-5ebb0e3e-9d49-4149-9b92-74422f1e813c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268989ms +Oct 27 14:58:49.303: INFO: Pod "pod-configmaps-5ebb0e3e-9d49-4149-9b92-74422f1e813c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010105773s +STEP: Saw pod success +Oct 27 14:58:49.303: INFO: Pod "pod-configmaps-5ebb0e3e-9d49-4149-9b92-74422f1e813c" satisfied condition "Succeeded or Failed" +Oct 27 14:58:49.308: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-5ebb0e3e-9d49-4149-9b92-74422f1e813c container env-test: +STEP: delete the pod +Oct 27 14:58:49.328: INFO: Waiting for pod pod-configmaps-5ebb0e3e-9d49-4149-9b92-74422f1e813c to disappear +Oct 27 14:58:49.332: INFO: Pod pod-configmaps-5ebb0e3e-9d49-4149-9b92-74422f1e813c no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:49.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9423" for this suite. 
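+
+The secrets test above injects a secret key as an environment variable and reads it back from the container. A minimal sketch; the secret name, key, and pod name are assumptions:
+
+```bash
+kubectl create secret generic env-secret-demo --from-literal=data-1=value-1
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-env-pod
+spec:
+  restartPolicy: Never
+  containers:
+  - name: env-test
+    image: busybox:1.36
+    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
+    env:
+    - name: SECRET_DATA
+      valueFrom:
+        secretKeyRef:
+          name: env-secret-demo
+          key: data-1
+EOF
+kubectl logs secret-env-pod   # expect: SECRET_DATA=value-1
+```
+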
+•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":230,"skipped":3884,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:49.345: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-2721 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:49.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2721" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":231,"skipped":3906,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:49.529: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-7816 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 14:58:49.737: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 14:58:49.747: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 14:58:49.760: INFO: +Logging pods the apiserver thinks is on node izgw81stpxs0bun38i01tfz before test +Oct 27 14:58:49.771: INFO: addons-nginx-ingress-controller-59fb958d58-lftrg from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-kbm9x from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: apiserver-proxy-k22hx from kube-system started at 2021-10-27 13:52:34 +0000 UTC (2 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: calico-kube-controllers-56bcbfb5c5-dr6cw from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: calico-node-bn6rh from kube-system started at 2021-10-27 13:55:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: calico-node-vertical-autoscaler-785b5f968-t4xfd from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: calico-typha-deploy-546b97d4b5-4h5cp from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-sfqph from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: calico-typha-vertical-autoscaler-5c9655cddd-9fp9m from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: coredns-74d494ccd9-b4xr9 from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: coredns-74d494ccd9-tk5m9 from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: csi-disk-plugin-alicloud-zkfgk from kube-system started at 2021-10-27 13:52:34 +0000 UTC (3 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: kube-proxy-x6l7r from kube-system started at 2021-10-27 13:55:43 +0000 UTC (2 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: 
metrics-server-5d4664d665-hnljs from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: node-exporter-dh57q from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: node-problem-detector-wm6mk from kube-system started at 2021-10-27 14:19:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: vpn-shoot-78f675c9df-gzflt from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: dashboard-metrics-scraper-7ccbfc448f-4l6g7 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 14:58:49.771: INFO: kubernetes-dashboard-6cc9c75584-c47x8 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.771: INFO: Container kubernetes-dashboard ready: true, restart count 3 +Oct 27 14:58:49.771: INFO: +Logging pods the apiserver thinks is on node izgw89f23rpcwrl79tpgp1z before test +Oct 27 14:58:49.779: INFO: apiserver-proxy-vbdr6 from kube-system started at 2021-10-27 13:52:48 +0000 UTC (2 container statuses recorded) +Oct 27 14:58:49.779: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: blackbox-exporter-65c549b94c-tkdlz from kube-system started at 2021-10-27 13:59:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.779: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: calico-node-fxz56 from kube-system started at 2021-10-27 13:55:41 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.779: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: csi-disk-plugin-alicloud-8kdpb from kube-system started at 2021-10-27 13:52:48 +0000 UTC (3 container statuses recorded) +Oct 27 14:58:49.779: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: kube-proxy-2s7tx from kube-system started at 2021-10-27 13:55:44 +0000 UTC (2 container statuses recorded) +Oct 27 14:58:49.779: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:58:49.779: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:58:49.780: INFO: node-exporter-zqsss from kube-system started at 2021-10-27 13:52:48 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.780: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:58:49.780: INFO: node-problem-detector-tddcd from kube-system started at 2021-10-27 14:19:43 +0000 UTC (1 container statuses recorded) +Oct 27 14:58:49.780: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: verifying the node has the label node izgw81stpxs0bun38i01tfz +STEP: verifying the node has the label node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.881: INFO: Pod addons-nginx-ingress-controller-59fb958d58-lftrg requesting resource cpu=100m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-kbm9x requesting resource cpu=0m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod apiserver-proxy-k22hx requesting resource cpu=40m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod apiserver-proxy-vbdr6 requesting resource cpu=40m on Node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.882: INFO: Pod blackbox-exporter-65c549b94c-tkdlz requesting resource cpu=11m on Node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.882: INFO: Pod calico-kube-controllers-56bcbfb5c5-dr6cw requesting resource cpu=10m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod calico-node-bn6rh requesting resource cpu=250m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod calico-node-fxz56 requesting resource cpu=250m on Node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.882: INFO: Pod calico-node-vertical-autoscaler-785b5f968-t4xfd requesting resource cpu=10m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod calico-typha-deploy-546b97d4b5-4h5cp requesting resource cpu=200m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod calico-typha-horizontal-autoscaler-5b58bb446c-sfqph requesting resource cpu=10m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod calico-typha-vertical-autoscaler-5c9655cddd-9fp9m requesting resource cpu=10m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod coredns-74d494ccd9-b4xr9 requesting resource cpu=50m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod coredns-74d494ccd9-tk5m9 requesting resource cpu=50m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod csi-disk-plugin-alicloud-8kdpb requesting resource cpu=40m on Node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.882: INFO: Pod csi-disk-plugin-alicloud-zkfgk requesting resource cpu=40m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod kube-proxy-2s7tx requesting resource cpu=34m on Node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.882: INFO: Pod kube-proxy-x6l7r requesting resource cpu=34m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod metrics-server-5d4664d665-hnljs requesting resource cpu=50m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod node-exporter-dh57q requesting resource cpu=50m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod node-exporter-zqsss requesting resource cpu=50m on Node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.882: INFO: Pod node-problem-detector-tddcd requesting resource cpu=11m on Node izgw89f23rpcwrl79tpgp1z +Oct 27 14:58:49.882: INFO: Pod node-problem-detector-wm6mk requesting resource cpu=11m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod vpn-shoot-78f675c9df-gzflt requesting resource cpu=11m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod dashboard-metrics-scraper-7ccbfc448f-4l6g7 requesting resource cpu=0m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.882: INFO: Pod kubernetes-dashboard-6cc9c75584-c47x8 requesting resource cpu=50m on Node izgw81stpxs0bun38i01tfz +STEP: Starting Pods to consume most of the cluster CPU. 
+Oct 27 14:58:49.882: INFO: Creating a pod which consumes cpu=660m on Node izgw81stpxs0bun38i01tfz +Oct 27 14:58:49.893: INFO: Creating a pod which consumes cpu=1038m on Node izgw89f23rpcwrl79tpgp1z +STEP: Creating another pod that requires unavailable amount of CPU. +STEP: Considering event: +Type = [Normal], Name = [filler-pod-aa7d394e-487f-4bb0-a8b0-63d3179a3004.16b1eb2fb80fe334], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7816/filler-pod-aa7d394e-487f-4bb0-a8b0-63d3179a3004 to izgw81stpxs0bun38i01tfz] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-aa7d394e-487f-4bb0-a8b0-63d3179a3004.16b1eb2fd9e05de7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-aa7d394e-487f-4bb0-a8b0-63d3179a3004.16b1eb2fdef28c42], Reason = [Created], Message = [Created container filler-pod-aa7d394e-487f-4bb0-a8b0-63d3179a3004] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-aa7d394e-487f-4bb0-a8b0-63d3179a3004.16b1eb2fe3a3b590], Reason = [Started], Message = [Started container filler-pod-aa7d394e-487f-4bb0-a8b0-63d3179a3004] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-be1a5eea-27b3-4262-98a3-0d13d0592ba3.16b1eb2fb8a28138], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7816/filler-pod-be1a5eea-27b3-4262-98a3-0d13d0592ba3 to izgw89f23rpcwrl79tpgp1z] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-be1a5eea-27b3-4262-98a3-0d13d0592ba3.16b1eb2fd9a410f8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-be1a5eea-27b3-4262-98a3-0d13d0592ba3.16b1eb2fe00de542], Reason = [Created], Message = [Created container filler-pod-be1a5eea-27b3-4262-98a3-0d13d0592ba3] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-be1a5eea-27b3-4262-98a3-0d13d0592ba3.16b1eb2fe47d33db], Reason = [Started], Message = [Started container filler-pod-be1a5eea-27b3-4262-98a3-0d13d0592ba3] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.16b1eb30319f83f9], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] +STEP: removing the label node off the node izgw81stpxs0bun38i01tfz +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node izgw89f23rpcwrl79tpgp1z +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:52.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-7816" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":232,"skipped":3919,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:52.991: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslicemirroring-2954 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: mirroring a new custom Endpoint +Oct 27 14:58:53.154: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint +STEP: mirroring deletion of a custom Endpoint +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:55.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-2954" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":233,"skipped":3933,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:55.193: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-9672 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:55.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-9672" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":234,"skipped":3969,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:55.419: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3346 +STEP: Waiting for a default service account to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:58:55.587: INFO: created pod +Oct 27 14:58:55.587: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3346" to be "Succeeded or Failed" +Oct 27 14:58:55.591: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148187ms +Oct 27 14:58:57.597: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009802996s +STEP: Saw pod success +Oct 27 14:58:57.597: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Oct 27 14:59:27.599: INFO: polling logs +Oct 27 14:59:27.611: INFO: Pod logs: +2021/10/27 14:58:56 OK: Got token +2021/10/27 14:58:56 validating with in-cluster discovery +2021/10/27 14:58:56 OK: got issuer https://api.tmanu-jzf.it.internal.staging.k8s.ondemand.com +2021/10/27 14:58:56 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmanu-jzf.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-3346:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635347335, NotBefore:1635346735, IssuedAt:1635346735, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3346", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"d418aff0-b70c-4477-9bc7-25597827f0a5"}}} +2021/10/27 14:58:56 OK: Constructed OIDC provider for issuer https://api.tmanu-jzf.it.internal.staging.k8s.ondemand.com +2021/10/27 14:58:56 OK: Validated signature on JWT +2021/10/27 14:58:56 OK: Got valid claims from token! 
+2021/10/27 14:58:56 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmanu-jzf.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-3346:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635347335, NotBefore:1635346735, IssuedAt:1635346735, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3346", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"d418aff0-b70c-4477-9bc7-25597827f0a5"}}} + +Oct 27 14:59:27.611: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:27.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3346" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":235,"skipped":4021,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:27.631: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7734 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Oct 27 14:59:28.299: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:59:30.306: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Oct 27 14:59:31.333: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:32.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-7734" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":236,"skipped":4042,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:32.370: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-7526 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:59:32.552: INFO: Number of nodes with available pods: 0 +Oct 27 14:59:32.552: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:59:33.566: INFO: Number of nodes with available pods: 1 +Oct 27 14:59:33.566: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 14:59:34.567: INFO: Number of nodes with available pods: 2 +Oct 27 14:59:34.567: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Oct 27 14:59:34.592: INFO: Number of nodes with available pods: 1 +Oct 27 14:59:34.592: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:59:35.606: INFO: Number of nodes with available pods: 1 +Oct 27 14:59:35.606: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:59:36.604: INFO: Number of nodes with available pods: 1 +Oct 27 14:59:36.604: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 14:59:37.607: INFO: Number of nodes with available pods: 2 +Oct 27 14:59:37.607: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7526, will wait for the garbage collector to delete the pods +Oct 27 14:59:37.675: INFO: Deleting DaemonSet.extensions daemon-set took: 6.373525ms +Oct 27 14:59:37.775: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.318657ms +Oct 27 14:59:40.580: INFO: Number of nodes with available pods: 0 +Oct 27 14:59:40.581: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:59:40.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"30536"},"items":null} + +Oct 27 14:59:40.593: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"30536"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:40.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7526" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":237,"skipped":4085,"failed":0} +SSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:40.622: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5910 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-5910 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-5910 +STEP: Waiting until pod test-pod will start running in namespace statefulset-5910 +STEP: Creating statefulset with conflicting port in namespace statefulset-5910 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5910 +Oct 27 14:59:42.828: INFO: Observed stateful pod in namespace: statefulset-5910, name: ss-0, uid: 2645c16b-c2af-4486-a062-b32808f54ae3, status phase: Pending. Waiting for statefulset controller to delete. +Oct 27 14:59:42.844: INFO: Observed stateful pod in namespace: statefulset-5910, name: ss-0, uid: 2645c16b-c2af-4486-a062-b32808f54ae3, status phase: Failed. Waiting for statefulset controller to delete. +Oct 27 14:59:42.849: INFO: Observed stateful pod in namespace: statefulset-5910, name: ss-0, uid: 2645c16b-c2af-4486-a062-b32808f54ae3, status phase: Failed. Waiting for statefulset controller to delete. +Oct 27 14:59:42.850: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5910 +STEP: Removing pod with conflicting port in namespace statefulset-5910 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5910 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:59:44.877: INFO: Deleting all statefulset in ns statefulset-5910 +Oct 27 14:59:44.881: INFO: Scaling statefulset ss to 0 +Oct 27 14:59:54.903: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:59:54.907: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:54.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5910" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":238,"skipped":4090,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:54.938: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-9557 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test env composition +Oct 27 14:59:55.108: INFO: Waiting up to 5m0s for pod "var-expansion-acfde89a-e1d4-4037-94fb-dfd8ce7e580e" in namespace "var-expansion-9557" to be "Succeeded or Failed" +Oct 27 14:59:55.112: INFO: Pod "var-expansion-acfde89a-e1d4-4037-94fb-dfd8ce7e580e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371643ms +Oct 27 14:59:57.119: INFO: Pod "var-expansion-acfde89a-e1d4-4037-94fb-dfd8ce7e580e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01067786s +STEP: Saw pod success +Oct 27 14:59:57.119: INFO: Pod "var-expansion-acfde89a-e1d4-4037-94fb-dfd8ce7e580e" satisfied condition "Succeeded or Failed" +Oct 27 14:59:57.123: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod var-expansion-acfde89a-e1d4-4037-94fb-dfd8ce7e580e container dapi-container: +STEP: delete the pod +Oct 27 14:59:57.206: INFO: Waiting for pod var-expansion-acfde89a-e1d4-4037-94fb-dfd8ce7e580e to disappear +Oct 27 14:59:57.210: INFO: Pod var-expansion-acfde89a-e1d4-4037-94fb-dfd8ce7e580e no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:57.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9557" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":239,"skipped":4115,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:57.225: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-6643 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Oct 27 14:59:58.043: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:59:58.043396 5703 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:58.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6643" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":240,"skipped":4124,"failed":0} +SSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:58.054: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-8249 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Oct 27 15:00:00.227: INFO: pods: 0 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 15:00:02.301: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:04.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-8249" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":241,"skipped":4129,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:04.354: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-8065 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Oct 27 15:00:14.545: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 15:00:14.545077 5703 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:14.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-8065" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":242,"skipped":4149,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:14.556: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2855 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:00:14.722: INFO: The status of Pod annotationupdate9871a724-58c7-45da-8394-9e75d43bcc1f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:00:16.729: INFO: The status of Pod annotationupdate9871a724-58c7-45da-8394-9e75d43bcc1f is Running (Ready = true) +Oct 27 15:00:17.302: INFO: Successfully updated pod "annotationupdate9871a724-58c7-45da-8394-9e75d43bcc1f" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:21.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2855" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":243,"skipped":4157,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:21.349: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-727 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:00:22.065: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 15:00:24.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943622, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943622, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943622, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943622, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:00:27.098: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:27.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-727" for this suite. +STEP: Destroying namespace "webhook-727-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":244,"skipped":4172,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:27.375: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4819 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-4819 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-4819 +I1027 15:00:27.549191 5703 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4819, replica count: 2 +Oct 27 15:00:30.600: INFO: Creating new exec pod +I1027 15:00:30.600521 5703 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:00:33.625: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4819 exec execpodfrhwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:00:33.962: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:00:33.962: INFO: stdout: "" +Oct 27 15:00:34.963: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4819 exec execpodfrhwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:00:35.224: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] 
succeeded!\n" +Oct 27 15:00:35.224: INFO: stdout: "" +Oct 27 15:00:35.963: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4819 exec execpodfrhwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:00:36.239: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:00:36.239: INFO: stdout: "" +Oct 27 15:00:36.963: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4819 exec execpodfrhwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:00:37.264: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:00:37.264: INFO: stdout: "externalname-service-hmlsh" +Oct 27 15:00:37.264: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4819 exec execpodfrhwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.26.249.207 80' +Oct 27 15:00:37.554: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.26.249.207 80\nConnection to 172.26.249.207 80 port [tcp/http] succeeded!\n" +Oct 27 15:00:37.554: INFO: stdout: "" +Oct 27 15:00:38.555: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4819 exec execpodfrhwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.26.249.207 80' +Oct 27 15:00:38.807: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.26.249.207 80\nConnection to 172.26.249.207 80 port [tcp/http] succeeded!\n" +Oct 27 15:00:38.807: INFO: stdout: "externalname-service-l4zwp" +Oct 27 15:00:38.807: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:38.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4819" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":245,"skipped":4196,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:38.834: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3974 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 15:00:38.983: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3974 create -f -' +Oct 27 15:00:39.159: INFO: stderr: "" +Oct 27 15:00:39.159: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 15:00:40.166: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:00:40.166: INFO: Found 0 / 1 +Oct 27 15:00:41.168: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:00:41.168: INFO: Found 1 / 1 +Oct 27 15:00:41.168: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Oct 27 15:00:41.172: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:00:41.172: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 15:00:41.172: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3974 patch pod agnhost-primary-jm69c -p {"metadata":{"annotations":{"x":"y"}}}' +Oct 27 15:00:41.253: INFO: stderr: "" +Oct 27 15:00:41.253: INFO: stdout: "pod/agnhost-primary-jm69c patched\n" +STEP: checking annotations +Oct 27 15:00:41.258: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:00:41.258: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:41.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3974" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":246,"skipped":4199,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:41.272: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4178 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:00:41.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89bf9992-f713-4da3-8f0e-574eee00d2f0" in namespace "downward-api-4178" to be "Succeeded or Failed" +Oct 27 15:00:41.440: INFO: Pod "downwardapi-volume-89bf9992-f713-4da3-8f0e-574eee00d2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411391ms +Oct 27 15:00:43.446: INFO: Pod "downwardapi-volume-89bf9992-f713-4da3-8f0e-574eee00d2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009957717s +STEP: Saw pod success +Oct 27 15:00:43.446: INFO: Pod "downwardapi-volume-89bf9992-f713-4da3-8f0e-574eee00d2f0" satisfied condition "Succeeded or Failed" +Oct 27 15:00:43.451: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-89bf9992-f713-4da3-8f0e-574eee00d2f0 container client-container: +STEP: delete the pod +Oct 27 15:00:43.508: INFO: Waiting for pod downwardapi-volume-89bf9992-f713-4da3-8f0e-574eee00d2f0 to disappear +Oct 27 15:00:43.512: INFO: Pod downwardapi-volume-89bf9992-f713-4da3-8f0e-574eee00d2f0 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:43.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4178" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":247,"skipped":4208,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:43.526: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9287 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-3eb38eb6-7f40-406a-a1a3-0538f0a30151 +STEP: Creating a pod to test consume secrets +Oct 27 15:00:43.865: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a35858e7-5003-4125-b5c0-b7423883bd24" in namespace "projected-9287" to be "Succeeded or Failed" +Oct 27 15:00:43.871: INFO: Pod "pod-projected-secrets-a35858e7-5003-4125-b5c0-b7423883bd24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169662ms +Oct 27 15:00:45.877: INFO: Pod "pod-projected-secrets-a35858e7-5003-4125-b5c0-b7423883bd24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012230452s +STEP: Saw pod success +Oct 27 15:00:45.877: INFO: Pod "pod-projected-secrets-a35858e7-5003-4125-b5c0-b7423883bd24" satisfied condition "Succeeded or Failed" +Oct 27 15:00:45.881: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-secrets-a35858e7-5003-4125-b5c0-b7423883bd24 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 15:00:45.901: INFO: Waiting for pod pod-projected-secrets-a35858e7-5003-4125-b5c0-b7423883bd24 to disappear +Oct 27 15:00:45.905: INFO: Pod pod-projected-secrets-a35858e7-5003-4125-b5c0-b7423883bd24 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:45.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9287" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":248,"skipped":4229,"failed":0} +SSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:45.918: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-5439 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-5439 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 15:00:46.063: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 15:00:46.100: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:00:48.105: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:00:50.107: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:00:52.106: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:00:54.106: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:00:56.106: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:00:58.107: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 15:00:58.116: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 15:01:00.148: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 15:01:00.148: INFO: Breadth first check of 172.16.0.70 on host 10.250.8.34... +Oct 27 15:01:00.152: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.16.1.26:9080/dial?request=hostname&protocol=http&host=172.16.0.70&port=8083&tries=1'] Namespace:pod-network-test-5439 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:01:00.152: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:01:00.352: INFO: Waiting for responses: map[] +Oct 27 15:01:00.352: INFO: reached 172.16.0.70 after 0/1 tries +Oct 27 15:01:00.352: INFO: Breadth first check of 172.16.1.25 on host 10.250.8.35... 
+Oct 27 15:01:00.357: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.16.1.26:9080/dial?request=hostname&protocol=http&host=172.16.1.25&port=8083&tries=1'] Namespace:pod-network-test-5439 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:01:00.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:01:00.601: INFO: Waiting for responses: map[] +Oct 27 15:01:00.601: INFO: reached 172.16.1.25 after 0/1 tries +Oct 27 15:01:00.601: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:01:00.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-5439" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":249,"skipped":4234,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:01:00.616: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9004 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 15:01:00.777: INFO: Waiting up to 5m0s for pod "pod-4e0b502b-84e0-4d1a-9c7d-09b3eae7edf3" in namespace "emptydir-9004" to be "Succeeded or Failed" +Oct 27 15:01:00.782: INFO: Pod "pod-4e0b502b-84e0-4d1a-9c7d-09b3eae7edf3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.08335ms +Oct 27 15:01:02.789: INFO: Pod "pod-4e0b502b-84e0-4d1a-9c7d-09b3eae7edf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012423487s +STEP: Saw pod success +Oct 27 15:01:02.789: INFO: Pod "pod-4e0b502b-84e0-4d1a-9c7d-09b3eae7edf3" satisfied condition "Succeeded or Failed" +Oct 27 15:01:02.794: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-4e0b502b-84e0-4d1a-9c7d-09b3eae7edf3 container test-container: +STEP: delete the pod +Oct 27 15:01:02.855: INFO: Waiting for pod pod-4e0b502b-84e0-4d1a-9c7d-09b3eae7edf3 to disappear +Oct 27 15:01:02.860: INFO: Pod pod-4e0b502b-84e0-4d1a-9c7d-09b3eae7edf3 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:01:02.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9004" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":250,"skipped":4246,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:01:02.874: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-1827 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 15:01:03.047: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:01:05.052: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 15:01:05.073: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:01:07.079: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 15:01:07.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:01:07.095: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 27 15:01:09.095: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:01:09.100: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 27 15:01:11.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:01:11.101: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:01:11.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-1827" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":251,"skipped":4259,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:01:11.168: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-341 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:07:01.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-341" for this suite. + +• [SLOW TEST:350.203 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":252,"skipped":4274,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:07:01.372: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-5931 +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:07:01.526: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating first CR +Oct 27 15:07:04.099: INFO: Got : ADDED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:07:04Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:07:04Z]] name:name1 resourceVersion:33197 uid:edf7b336-7c77-4554-9649-63133d0f1947] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Oct 27 15:07:14.106: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:07:14Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:07:14Z]] name:name2 resourceVersion:33246 uid:0cc0413d-5e14-4a60-9087-fc93fbce7fe6] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Oct 27 15:07:24.115: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:07:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:07:24Z]] name:name1 resourceVersion:33290 uid:edf7b336-7c77-4554-9649-63133d0f1947] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Oct 27 15:07:34.123: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:07:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:07:34Z]] name:name2 resourceVersion:33333 uid:0cc0413d-5e14-4a60-9087-fc93fbce7fe6] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Oct 27 15:07:44.131: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:07:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:07:24Z]] name:name1 resourceVersion:33399 uid:edf7b336-7c77-4554-9649-63133d0f1947] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Oct 27 15:07:54.138: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:07:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:07:34Z]] name:name2 resourceVersion:33443 uid:0cc0413d-5e14-4a60-9087-fc93fbce7fe6] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] 
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:08:04.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-5931" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":253,"skipped":4360,"failed":0} +SS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:08:04.669: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5501 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create deployment with httpd image +Oct 27 15:08:04.820: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5501 create -f -' +Oct 27 15:08:05.843: INFO: stderr: "" +Oct 27 15:08:05.843: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Oct 27 15:08:05.843: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5501 diff -f -' +Oct 27 15:08:06.100: INFO: rc: 1 +Oct 27 15:08:06.100: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5501 delete -f -' +Oct 27 15:08:06.201: INFO: stderr: "" +Oct 27 15:08:06.201: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:08:06.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5501" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":254,"skipped":4362,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:08:06.214: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-2075 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override command +Oct 27 15:08:06.374: INFO: Waiting up to 5m0s for pod "client-containers-49f52dfd-d46f-4945-9c25-3b97c3404aa0" in namespace "containers-2075" to be "Succeeded or Failed" +Oct 27 15:08:06.378: INFO: Pod "client-containers-49f52dfd-d46f-4945-9c25-3b97c3404aa0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41273ms +Oct 27 15:08:08.384: INFO: Pod "client-containers-49f52dfd-d46f-4945-9c25-3b97c3404aa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010468035s +STEP: Saw pod success +Oct 27 15:08:08.384: INFO: Pod "client-containers-49f52dfd-d46f-4945-9c25-3b97c3404aa0" satisfied condition "Succeeded or Failed" +Oct 27 15:08:08.388: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod client-containers-49f52dfd-d46f-4945-9c25-3b97c3404aa0 container agnhost-container: +STEP: delete the pod +Oct 27 15:08:08.411: INFO: Waiting for pod client-containers-49f52dfd-d46f-4945-9c25-3b97c3404aa0 to disappear +Oct 27 15:08:08.415: INFO: Pod client-containers-49f52dfd-d46f-4945-9c25-3b97c3404aa0 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:08:08.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2075" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":255,"skipped":4373,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:08:08.429: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4569 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:08:10.599: INFO: Deleting pod "var-expansion-56cf6d8f-7bde-4869-9d65-e0cf232ce1f7" in namespace "var-expansion-4569" +Oct 27 15:08:10.605: INFO: Wait up to 5m0s for pod "var-expansion-56cf6d8f-7bde-4869-9d65-e0cf232ce1f7" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:08:14.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4569" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":256,"skipped":4394,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:08:14.628: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-1835 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:08:14.776: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:08:14.787: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 15:08:14.791: INFO: +Logging pods the apiserver thinks is on node izgw81stpxs0bun38i01tfz before test +Oct 27 15:08:14.802: INFO: addons-nginx-ingress-controller-59fb958d58-lftrg from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-kbm9x from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: apiserver-proxy-k22hx from kube-system started at 2021-10-27 13:52:34 +0000 UTC (2 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: calico-kube-controllers-56bcbfb5c5-dr6cw from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: calico-node-bn6rh from kube-system started at 2021-10-27 13:55:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: calico-node-vertical-autoscaler-785b5f968-t4xfd from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: calico-typha-deploy-546b97d4b5-4h5cp from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-sfqph from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: calico-typha-vertical-autoscaler-5c9655cddd-9fp9m from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: coredns-74d494ccd9-b4xr9 from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: coredns-74d494ccd9-tk5m9 from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: csi-disk-plugin-alicloud-zkfgk from kube-system started at 2021-10-27 13:52:34 +0000 UTC (3 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: kube-proxy-x6l7r from kube-system started at 2021-10-27 13:55:43 +0000 UTC (2 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: 
metrics-server-5d4664d665-hnljs from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: node-exporter-dh57q from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: node-problem-detector-wm6mk from kube-system started at 2021-10-27 14:19:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: vpn-shoot-78f675c9df-gzflt from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: dashboard-metrics-scraper-7ccbfc448f-4l6g7 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:08:14.802: INFO: kubernetes-dashboard-6cc9c75584-c47x8 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.802: INFO: Container kubernetes-dashboard ready: true, restart count 3 +Oct 27 15:08:14.802: INFO: +Logging pods the apiserver thinks is on node izgw89f23rpcwrl79tpgp1z before test +Oct 27 15:08:14.811: INFO: apiserver-proxy-vbdr6 from kube-system started at 2021-10-27 13:52:48 +0000 UTC (2 container statuses recorded) +Oct 27 15:08:14.811: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: blackbox-exporter-65c549b94c-tkdlz from kube-system started at 2021-10-27 13:59:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.811: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: calico-node-fxz56 from kube-system started at 2021-10-27 13:55:41 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.811: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: csi-disk-plugin-alicloud-8kdpb from kube-system started at 2021-10-27 13:52:48 +0000 UTC (3 container statuses recorded) +Oct 27 15:08:14.811: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: kube-proxy-2s7tx from kube-system started at 2021-10-27 13:55:44 +0000 UTC (2 container statuses recorded) +Oct 27 15:08:14.811: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: node-exporter-zqsss from kube-system started at 2021-10-27 13:52:48 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.811: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:08:14.811: INFO: node-problem-detector-tddcd from kube-system started at 2021-10-27 14:19:43 +0000 UTC (1 container statuses recorded) +Oct 27 15:08:14.811: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-41f46dbf-fd92-4fe9-8e21-434df23d3a44 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.250.8.35 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-41f46dbf-fd92-4fe9-8e21-434df23d3a44 off the node izgw89f23rpcwrl79tpgp1z +STEP: verifying the node doesn't have the label kubernetes.io/e2e-41f46dbf-fd92-4fe9-8e21-434df23d3a44 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:18.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-1835" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:304.319 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":257,"skipped":4414,"failed":0} +S +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:18.948: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3422 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:25.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-3422" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":258,"skipped":4415,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:25.126: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-7682 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:01.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-7682" for this suite. +•{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":259,"skipped":4428,"failed":0} +SSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:01.325: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-7169 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override all +Oct 27 15:15:01.485: INFO: Waiting up to 5m0s for pod "client-containers-38c0ae2d-0513-4a53-9969-ee3620527857" in namespace "containers-7169" to be "Succeeded or Failed" +Oct 27 15:15:01.490: INFO: Pod "client-containers-38c0ae2d-0513-4a53-9969-ee3620527857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.45452ms +Oct 27 15:15:03.497: INFO: Pod "client-containers-38c0ae2d-0513-4a53-9969-ee3620527857": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011153765s +STEP: Saw pod success +Oct 27 15:15:03.497: INFO: Pod "client-containers-38c0ae2d-0513-4a53-9969-ee3620527857" satisfied condition "Succeeded or Failed" +Oct 27 15:15:03.501: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod client-containers-38c0ae2d-0513-4a53-9969-ee3620527857 container agnhost-container: +STEP: delete the pod +Oct 27 15:15:03.524: INFO: Waiting for pod client-containers-38c0ae2d-0513-4a53-9969-ee3620527857 to disappear +Oct 27 15:15:03.529: INFO: Pod client-containers-38c0ae2d-0513-4a53-9969-ee3620527857 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:03.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-7169" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":260,"skipped":4432,"failed":0} +S +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:03.543: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-3301 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:15:03.693: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:15:03.704: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 15:15:03.708: INFO: +Logging pods the apiserver thinks is on node izgw81stpxs0bun38i01tfz before test +Oct 27 15:15:03.719: INFO: addons-nginx-ingress-controller-59fb958d58-lftrg from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.719: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:15:03.719: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-kbm9x from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.719: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:15:03.719: INFO: apiserver-proxy-k22hx from kube-system started at 2021-10-27 13:52:34 +0000 UTC (2 container statuses recorded) +Oct 27 15:15:03.719: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:15:03.719: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:15:03.719: INFO: calico-kube-controllers-56bcbfb5c5-dr6cw from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.719: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 15:15:03.719: INFO: calico-node-bn6rh from kube-system started at 2021-10-27 13:55:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.719: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: calico-node-vertical-autoscaler-785b5f968-t4xfd from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: calico-typha-deploy-546b97d4b5-4h5cp from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-sfqph from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: calico-typha-vertical-autoscaler-5c9655cddd-9fp9m from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: coredns-74d494ccd9-b4xr9 from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: coredns-74d494ccd9-tk5m9 from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: csi-disk-plugin-alicloud-zkfgk from kube-system started at 2021-10-27 13:52:34 +0000 UTC (3 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: kube-proxy-x6l7r from kube-system started at 2021-10-27 13:55:43 +0000 UTC (2 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: 
metrics-server-5d4664d665-hnljs from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: node-exporter-dh57q from kube-system started at 2021-10-27 13:52:34 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: node-problem-detector-wm6mk from kube-system started at 2021-10-27 14:19:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: vpn-shoot-78f675c9df-gzflt from kube-system started at 2021-10-27 14:12:03 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: dashboard-metrics-scraper-7ccbfc448f-4l6g7 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:15:03.720: INFO: kubernetes-dashboard-6cc9c75584-c47x8 from kubernetes-dashboard started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.720: INFO: Container kubernetes-dashboard ready: true, restart count 3 +Oct 27 15:15:03.720: INFO: +Logging pods the apiserver thinks is on node izgw89f23rpcwrl79tpgp1z before test +Oct 27 15:15:03.729: INFO: replace-27255794--1-65lnd from cronjob-7682 started at 2021-10-27 15:14:00 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container c ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: replace-27255795--1-m6r7r from cronjob-7682 started at 2021-10-27 15:15:00 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container c ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: apiserver-proxy-vbdr6 from kube-system started at 2021-10-27 13:52:48 +0000 UTC (2 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: blackbox-exporter-65c549b94c-tkdlz from kube-system started at 2021-10-27 13:59:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: calico-node-fxz56 from kube-system started at 2021-10-27 13:55:41 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: csi-disk-plugin-alicloud-8kdpb from kube-system started at 2021-10-27 13:52:48 +0000 UTC (3 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container csi-diskplugin ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: Container driver-registrar ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: kube-proxy-2s7tx from kube-system started at 2021-10-27 13:55:44 +0000 UTC (2 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: node-exporter-zqsss from kube-system started at 2021-10-27 13:52:48 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container 
node-exporter ready: true, restart count 0 +Oct 27 15:15:03.729: INFO: node-problem-detector-tddcd from kube-system started at 2021-10-27 14:19:43 +0000 UTC (1 container statuses recorded) +Oct 27 15:15:03.729: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-b73300a9-c781-4b7e-b5b1-fbf33531ccd5 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-b73300a9-c781-4b7e-b5b1-fbf33531ccd5 off the node izgw89f23rpcwrl79tpgp1z +STEP: verifying the node doesn't have the label kubernetes.io/e2e-b73300a9-c781-4b7e-b5b1-fbf33531ccd5 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:07.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3301" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":261,"skipped":4433,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:07.835: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6389 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:15:08.003: INFO: The status of Pod annotationupdate4549a612-89dd-4cc1-8d25-c7f7c799a810 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:15:10.009: INFO: The status of Pod annotationupdate4549a612-89dd-4cc1-8d25-c7f7c799a810 is Running (Ready = true) +Oct 27 15:15:10.587: INFO: Successfully updated pod "annotationupdate4549a612-89dd-4cc1-8d25-c7f7c799a810" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:14.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: 
Destroying namespace "downward-api-6389" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":262,"skipped":4442,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:14.634: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6139 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-23cb6172-a684-4836-8375-ff5860bf2580 in namespace container-probe-6139 +Oct 27 15:15:16.810: INFO: Started pod busybox-23cb6172-a684-4836-8375-ff5860bf2580 in namespace container-probe-6139 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:15:16.815: INFO: Initial restart count of pod busybox-23cb6172-a684-4836-8375-ff5860bf2580 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:19:17.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6139" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":263,"skipped":4474,"failed":0} +SSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:19:17.592: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-6274 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 15:19:17.756: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:20:17.808: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 27 15:20:17.849: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 15:20:17.859: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 15:20:17.880: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 15:20:17.890: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:29.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-6274" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":264,"skipped":4480,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:30.037: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4953 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 15:20:30.200: INFO: Waiting up to 5m0s for pod "pod-404eb3bc-19b2-4dc3-9f87-2f8df6133529" in namespace "emptydir-4953" to be "Succeeded or Failed" +Oct 27 15:20:30.206: INFO: Pod "pod-404eb3bc-19b2-4dc3-9f87-2f8df6133529": Phase="Pending", Reason="", readiness=false. Elapsed: 5.454962ms +Oct 27 15:20:32.212: INFO: Pod "pod-404eb3bc-19b2-4dc3-9f87-2f8df6133529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01156853s +STEP: Saw pod success +Oct 27 15:20:32.212: INFO: Pod "pod-404eb3bc-19b2-4dc3-9f87-2f8df6133529" satisfied condition "Succeeded or Failed" +Oct 27 15:20:32.217: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-404eb3bc-19b2-4dc3-9f87-2f8df6133529 container test-container: +STEP: delete the pod +Oct 27 15:20:32.241: INFO: Waiting for pod pod-404eb3bc-19b2-4dc3-9f87-2f8df6133529 to disappear +Oct 27 15:20:32.246: INFO: Pod pod-404eb3bc-19b2-4dc3-9f87-2f8df6133529 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4953" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":265,"skipped":4486,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:32.259: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-225 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 27 15:20:32.413: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:52.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-225" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":266,"skipped":4506,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:52.318: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-4639 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:52.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-4639" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":267,"skipped":4520,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:52.518: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-873 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:20:52.681: INFO: The status of Pod server-envvars-599ce900-4eed-4340-9b8c-2adf50776b30 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:20:54.687: INFO: The status of Pod server-envvars-599ce900-4eed-4340-9b8c-2adf50776b30 is Running (Ready = true) +Oct 27 15:20:54.710: INFO: Waiting up to 5m0s for pod "client-envvars-c3e4bb52-4fce-41f4-b2a9-8b96775d87a6" in namespace "pods-873" to be "Succeeded or Failed" +Oct 27 15:20:54.715: INFO: Pod 
"client-envvars-c3e4bb52-4fce-41f4-b2a9-8b96775d87a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174329ms +Oct 27 15:20:56.722: INFO: Pod "client-envvars-c3e4bb52-4fce-41f4-b2a9-8b96775d87a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011017284s +STEP: Saw pod success +Oct 27 15:20:56.722: INFO: Pod "client-envvars-c3e4bb52-4fce-41f4-b2a9-8b96775d87a6" satisfied condition "Succeeded or Failed" +Oct 27 15:20:56.726: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod client-envvars-c3e4bb52-4fce-41f4-b2a9-8b96775d87a6 container env3cont: +STEP: delete the pod +Oct 27 15:20:56.766: INFO: Waiting for pod client-envvars-c3e4bb52-4fce-41f4-b2a9-8b96775d87a6 to disappear +Oct 27 15:20:56.770: INFO: Pod client-envvars-c3e4bb52-4fce-41f4-b2a9-8b96775d87a6 no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:56.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-873" for this suite. +•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":268,"skipped":4527,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:56.784: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7736 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-12813807-4a5d-4876-af43-812262d571d4 in namespace container-probe-7736 +Oct 27 15:20:58.981: INFO: Started pod liveness-12813807-4a5d-4876-af43-812262d571d4 in namespace container-probe-7736 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:20:58.985: INFO: Initial restart count of pod liveness-12813807-4a5d-4876-af43-812262d571d4 is 0 +Oct 27 15:21:19.054: INFO: Restart count of pod container-probe-7736/liveness-12813807-4a5d-4876-af43-812262d571d4 is now 1 (20.068818836s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:19.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7736" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":269,"skipped":4550,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:19.078: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4010 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:21:19.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-970318cd-3877-47e5-9546-9ae1f00bd9a9" in namespace "downward-api-4010" to be "Succeeded or Failed" +Oct 27 15:21:19.248: INFO: Pod "downwardapi-volume-970318cd-3877-47e5-9546-9ae1f00bd9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477094ms +Oct 27 15:21:21.255: INFO: Pod "downwardapi-volume-970318cd-3877-47e5-9546-9ae1f00bd9a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010958651s +STEP: Saw pod success +Oct 27 15:21:21.255: INFO: Pod "downwardapi-volume-970318cd-3877-47e5-9546-9ae1f00bd9a9" satisfied condition "Succeeded or Failed" +Oct 27 15:21:21.260: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-970318cd-3877-47e5-9546-9ae1f00bd9a9 container client-container: +STEP: delete the pod +Oct 27 15:21:21.322: INFO: Waiting for pod downwardapi-volume-970318cd-3877-47e5-9546-9ae1f00bd9a9 to disappear +Oct 27 15:21:21.326: INFO: Pod downwardapi-volume-970318cd-3877-47e5-9546-9ae1f00bd9a9 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:21.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4010" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":270,"skipped":4562,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:21.340: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-5959 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Oct 27 15:21:24.023: INFO: Successfully updated pod "adopt-release--1-h6txx" +STEP: Checking that the Job readopts the Pod +Oct 27 15:21:24.023: INFO: Waiting up to 15m0s for pod "adopt-release--1-h6txx" in namespace "job-5959" to be "adopted" +Oct 27 15:21:24.027: INFO: Pod "adopt-release--1-h6txx": Phase="Running", Reason="", readiness=true. Elapsed: 4.391461ms +Oct 27 15:21:26.033: INFO: Pod "adopt-release--1-h6txx": Phase="Running", Reason="", readiness=true. Elapsed: 2.010226584s +Oct 27 15:21:26.033: INFO: Pod "adopt-release--1-h6txx" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Oct 27 15:21:26.546: INFO: Successfully updated pod "adopt-release--1-h6txx" +STEP: Checking that the Job releases the Pod +Oct 27 15:21:26.546: INFO: Waiting up to 15m0s for pod "adopt-release--1-h6txx" in namespace "job-5959" to be "released" +Oct 27 15:21:26.551: INFO: Pod "adopt-release--1-h6txx": Phase="Running", Reason="", readiness=true. Elapsed: 4.398288ms +Oct 27 15:21:28.557: INFO: Pod "adopt-release--1-h6txx": Phase="Running", Reason="", readiness=true. Elapsed: 2.011267035s +Oct 27 15:21:28.557: INFO: Pod "adopt-release--1-h6txx" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:28.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-5959" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":271,"skipped":4646,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:28.572: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-5838 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:28.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-5838" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":272,"skipped":4673,"failed":0} + +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:28.784: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-5160 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:21:30.956: INFO: Deleting pod "var-expansion-f99edbda-c4eb-4e34-9c4e-7c82dee00cf2" in namespace "var-expansion-5160" +Oct 27 15:21:30.962: INFO: Wait up to 5m0s for pod "var-expansion-f99edbda-c4eb-4e34-9c4e-7c82dee00cf2" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:32.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5160" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":273,"skipped":4673,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:32.986: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3117 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:21:33.146: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-7902491b-fe02-4424-b8f7-18a879854440" in namespace "security-context-test-3117" to be "Succeeded or Failed" +Oct 27 15:21:33.150: INFO: Pod "busybox-privileged-false-7902491b-fe02-4424-b8f7-18a879854440": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300981ms +Oct 27 15:21:35.157: INFO: Pod "busybox-privileged-false-7902491b-fe02-4424-b8f7-18a879854440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010725284s +Oct 27 15:21:35.157: INFO: Pod "busybox-privileged-false-7902491b-fe02-4424-b8f7-18a879854440" satisfied condition "Succeeded or Failed" +Oct 27 15:21:35.169: INFO: Got logs for pod "busybox-privileged-false-7902491b-fe02-4424-b8f7-18a879854440": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:35.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-3117" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":274,"skipped":4704,"failed":0} +SSSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:35.183: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6134 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... +Oct 27 15:21:35.344: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6134 19654efd-0175-4a60-b067-13ba3ca41ec8 37902 0 2021-10-27 15:21:35 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-27 15:21:35 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qdkz7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdkz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadCons
traints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:21:35.348: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:21:37.355: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Oct 27 15:21:37.355: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6134 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:21:37.355: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Verifying customized DNS server is configured on pod... +Oct 27 15:21:37.574: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6134 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:21:37.574: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:21:37.825: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:37.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6134" for this suite. +•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":275,"skipped":4710,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:37.847: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5820 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-5820 +Oct 27 15:21:38.009: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:21:40.015: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 15:21:40.020: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config 
--namespace=services-5820 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 15:21:40.662: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 15:21:40.662: INFO: stdout: "iptables" +Oct 27 15:21:40.662: INFO: proxyMode: iptables +Oct 27 15:21:40.669: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 27 15:21:40.674: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-5820 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-5820 +I1027 15:21:40.688670 5703 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5820, replica count: 3 +I1027 15:21:43.739086 5703 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:21:43.748: INFO: Creating new exec pod +Oct 27 15:21:46.768: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5820 exec execpod-affinity589m5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Oct 27 15:21:47.030: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 15:21:47.030: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:21:47.030: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5820 exec execpod-affinity589m5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.27.52.89 80' +Oct 27 15:21:47.285: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.27.52.89 80\nConnection to 172.27.52.89 80 port [tcp/http] succeeded!\n" +Oct 27 15:21:47.285: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:21:47.285: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5820 exec execpod-affinity589m5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.27.52.89:80/ ; done' +Oct 27 15:21:47.608: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n" +Oct 27 15:21:47.608: INFO: stdout: "\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh\naffinity-clusterip-timeout-npnzh" +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Received response from host: affinity-clusterip-timeout-npnzh +Oct 27 15:21:47.608: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5820 exec execpod-affinity589m5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.27.52.89:80/' +Oct 27 15:21:47.893: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n" +Oct 27 15:21:47.893: INFO: stdout: "affinity-clusterip-timeout-npnzh" +Oct 27 15:22:07.895: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5820 exec execpod-affinity589m5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.27.52.89:80/' +Oct 27 15:22:08.215: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n" +Oct 27 15:22:08.215: INFO: stdout: "affinity-clusterip-timeout-npnzh" +Oct 27 15:22:28.216: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5820 
exec execpod-affinity589m5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.27.52.89:80/' +Oct 27 15:22:28.507: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n" +Oct 27 15:22:28.507: INFO: stdout: "affinity-clusterip-timeout-npnzh" +Oct 27 15:22:48.508: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5820 exec execpod-affinity589m5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.27.52.89:80/' +Oct 27 15:22:48.809: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.27.52.89:80/\n" +Oct 27 15:22:48.809: INFO: stdout: "affinity-clusterip-timeout-nmmxw" +Oct 27 15:22:48.809: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5820, will wait for the garbage collector to delete the pods +Oct 27 15:22:48.890: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.214604ms +Oct 27 15:22:48.991: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.105837ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:51.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5820" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":276,"skipped":4729,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:51.315: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-533 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 15:22:51.475: INFO: Waiting up to 5m0s for pod "pod-591dff81-c83e-4de8-835c-5b44f4148aef" in namespace "emptydir-533" to be "Succeeded or Failed" +Oct 27 15:22:51.481: INFO: Pod "pod-591dff81-c83e-4de8-835c-5b44f4148aef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.502908ms +Oct 27 15:22:53.486: INFO: Pod "pod-591dff81-c83e-4de8-835c-5b44f4148aef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011017742s +STEP: Saw pod success +Oct 27 15:22:53.486: INFO: Pod "pod-591dff81-c83e-4de8-835c-5b44f4148aef" satisfied condition "Succeeded or Failed" +Oct 27 15:22:53.491: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-591dff81-c83e-4de8-835c-5b44f4148aef container test-container: +STEP: delete the pod +Oct 27 15:22:53.509: INFO: Waiting for pod pod-591dff81-c83e-4de8-835c-5b44f4148aef to disappear +Oct 27 15:22:53.514: INFO: Pod pod-591dff81-c83e-4de8-835c-5b44f4148aef no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:53.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-533" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":277,"skipped":4735,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:53.527: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7341 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Oct 27 15:22:53.684: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 15:22:58.690: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:58.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-7341" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":278,"skipped":4743,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:58.727: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-5419 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:58.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-5419" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":279,"skipped":4747,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:58.924: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1327 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:22:59.607: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:23:02.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:02.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1327" for this suite. +STEP: Destroying namespace "webhook-1327-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":280,"skipped":4785,"failed":0} +SSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:02.897: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-8967 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:03.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-8967" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":281,"skipped":4789,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:03.080: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7491 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-cd19f205-9957-4112-906d-a9042df2369c +STEP: Creating a pod to test consume configMaps +Oct 27 15:23:03.241: INFO: Waiting up to 5m0s for pod "pod-configmaps-0629450e-392c-4dc0-b16a-ba716d7c6c29" in namespace "configmap-7491" to be "Succeeded or Failed" +Oct 27 15:23:03.245: INFO: Pod "pod-configmaps-0629450e-392c-4dc0-b16a-ba716d7c6c29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154727ms +Oct 27 15:23:05.251: INFO: Pod "pod-configmaps-0629450e-392c-4dc0-b16a-ba716d7c6c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010395787s +STEP: Saw pod success +Oct 27 15:23:05.251: INFO: Pod "pod-configmaps-0629450e-392c-4dc0-b16a-ba716d7c6c29" satisfied condition "Succeeded or Failed" +Oct 27 15:23:05.256: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-0629450e-392c-4dc0-b16a-ba716d7c6c29 container agnhost-container: +STEP: delete the pod +Oct 27 15:23:05.317: INFO: Waiting for pod pod-configmaps-0629450e-392c-4dc0-b16a-ba716d7c6c29 to disappear +Oct 27 15:23:05.321: INFO: Pod pod-configmaps-0629450e-392c-4dc0-b16a-ba716d7c6c29 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:05.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7491" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":282,"skipped":4831,"failed":0} +S +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:05.334: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-5429 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 15:23:05.526: INFO: Number of nodes with available pods: 0 +Oct 27 15:23:05.526: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 15:23:06.539: INFO: Number of nodes with available pods: 0 +Oct 27 15:23:06.539: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 15:23:07.539: INFO: Number of nodes with available pods: 2 +Oct 27 15:23:07.539: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Getting /status +Oct 27 15:23:07.548: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Oct 27 15:23:07.559: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Oct 27 15:23:07.563: INFO: Observed &DaemonSet event: ADDED +Oct 27 15:23:07.563: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.564: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.564: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.564: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.564: INFO: Found daemon set daemon-set in namespace daemonsets-5429 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 15:23:07.564: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Oct 27 15:23:07.575: INFO: Observed &DaemonSet event: ADDED +Oct 27 15:23:07.575: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.575: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.575: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.575: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.575: INFO: Observed daemon set daemon-set in namespace daemonsets-5429 with annotations: map[deprecated.daemonset.template.generation:1] & 
Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 15:23:07.576: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:23:07.576: INFO: Found daemon set daemon-set in namespace daemonsets-5429 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Oct 27 15:23:07.576: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5429, will wait for the garbage collector to delete the pods +Oct 27 15:23:07.641: INFO: Deleting DaemonSet.extensions daemon-set took: 6.289881ms +Oct 27 15:23:07.742: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.685927ms +Oct 27 15:23:10.347: INFO: Number of nodes with available pods: 0 +Oct 27 15:23:10.348: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 15:23:10.351: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"38691"},"items":null} + +Oct 27 15:23:10.356: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"38693"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:10.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-5429" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":283,"skipped":4832,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:10.382: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4780 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-4780 +STEP: creating service affinity-clusterip in namespace services-4780 +STEP: creating replication controller affinity-clusterip in namespace services-4780 +I1027 15:23:10.540512 5703 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4780, replica count: 3 +I1027 15:23:13.592801 5703 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady +Oct 27 15:23:13.602: INFO: Creating new exec pod +Oct 27 15:23:16.625: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4780 exec execpod-affinityfvxsh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Oct 27 15:23:16.954: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Oct 27 15:23:16.954: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:23:16.954: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4780 exec execpod-affinityfvxsh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.30.111.157 80' +Oct 27 15:23:17.276: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.30.111.157 80\nConnection to 172.30.111.157 80 port [tcp/http] succeeded!\n" +Oct 27 15:23:17.276: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:23:17.276: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4780 exec execpod-affinityfvxsh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.111.157:80/ ; done' +Oct 27 15:23:17.612: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.111.157:80/\n" +Oct 27 15:23:17.612: INFO: stdout: "\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp\naffinity-clusterip-244bp" +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp 
+Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Received response from host: affinity-clusterip-244bp +Oct 27 15:23:17.612: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-4780, will wait for the garbage collector to delete the pods +Oct 27 15:23:17.690: INFO: Deleting ReplicationController affinity-clusterip took: 6.43495ms +Oct 27 15:23:17.791: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.583114ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:19.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4780" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":284,"skipped":4855,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:19.414: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6539 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-6539 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-6539 +STEP: creating replication controller externalsvc in namespace 
services-6539 +I1027 15:23:19.587199 5703 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6539, replica count: 2 +I1027 15:23:22.638908 5703 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Oct 27 15:23:22.658: INFO: Creating new exec pod +Oct 27 15:23:24.678: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6539 exec execpodrvl22 -- /bin/sh -x -c nslookup nodeport-service.services-6539.svc.cluster.local' +Oct 27 15:23:24.948: INFO: stderr: "+ nslookup nodeport-service.services-6539.svc.cluster.local\n" +Oct 27 15:23:24.948: INFO: stdout: "Server:\t\t172.24.0.10\nAddress:\t172.24.0.10#53\n\nnodeport-service.services-6539.svc.cluster.local\tcanonical name = externalsvc.services-6539.svc.cluster.local.\nName:\texternalsvc.services-6539.svc.cluster.local\nAddress: 172.25.234.155\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-6539, will wait for the garbage collector to delete the pods +Oct 27 15:23:25.010: INFO: Deleting ReplicationController externalsvc took: 6.646385ms +Oct 27 15:23:25.111: INFO: Terminating ReplicationController externalsvc pods took: 100.50213ms +Oct 27 15:23:27.322: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:27.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6539" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":285,"skipped":4876,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:27.369: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9332 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-9332/configmap-test-ea9b09f8-b6b8-4f90-abe8-1811f8431ef3 +STEP: Creating a pod to test consume configMaps +Oct 27 15:23:27.532: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ced8f80-c705-485d-ab76-cce42ae70529" in namespace "configmap-9332" to be "Succeeded or Failed" +Oct 27 15:23:27.537: INFO: Pod "pod-configmaps-4ced8f80-c705-485d-ab76-cce42ae70529": Phase="Pending", Reason="", readiness=false. Elapsed: 5.221648ms +Oct 27 15:23:29.543: INFO: Pod "pod-configmaps-4ced8f80-c705-485d-ab76-cce42ae70529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010896814s +STEP: Saw pod success +Oct 27 15:23:29.543: INFO: Pod "pod-configmaps-4ced8f80-c705-485d-ab76-cce42ae70529" satisfied condition "Succeeded or Failed" +Oct 27 15:23:29.548: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-configmaps-4ced8f80-c705-485d-ab76-cce42ae70529 container env-test: +STEP: delete the pod +Oct 27 15:23:29.565: INFO: Waiting for pod pod-configmaps-4ced8f80-c705-485d-ab76-cce42ae70529 to disappear +Oct 27 15:23:29.569: INFO: Pod pod-configmaps-4ced8f80-c705-485d-ab76-cce42ae70529 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:29.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9332" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":286,"skipped":4890,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:29.583: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-2637 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:23:29.744: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-004e2e05-f821-490c-8358-102a22829ef7" in namespace "security-context-test-2637" to be "Succeeded or Failed" +Oct 27 15:23:29.749: INFO: Pod "alpine-nnp-false-004e2e05-f821-490c-8358-102a22829ef7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.530317ms +Oct 27 15:23:31.755: INFO: Pod "alpine-nnp-false-004e2e05-f821-490c-8358-102a22829ef7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011119593s +Oct 27 15:23:33.762: INFO: Pod "alpine-nnp-false-004e2e05-f821-490c-8358-102a22829ef7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017924165s +Oct 27 15:23:33.762: INFO: Pod "alpine-nnp-false-004e2e05-f821-490c-8358-102a22829ef7" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:33.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-2637" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":287,"skipped":4915,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:33.827: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8714 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 15:23:33.991: INFO: Waiting up to 5m0s for pod "pod-b1c47b2d-4cb2-4044-a199-711983c9a6cc" in namespace "emptydir-8714" to be "Succeeded or Failed" +Oct 27 15:23:33.995: INFO: Pod "pod-b1c47b2d-4cb2-4044-a199-711983c9a6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213616ms +Oct 27 15:23:36.001: INFO: Pod "pod-b1c47b2d-4cb2-4044-a199-711983c9a6cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01003361s +STEP: Saw pod success +Oct 27 15:23:36.001: INFO: Pod "pod-b1c47b2d-4cb2-4044-a199-711983c9a6cc" satisfied condition "Succeeded or Failed" +Oct 27 15:23:36.006: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-b1c47b2d-4cb2-4044-a199-711983c9a6cc container test-container: +STEP: delete the pod +Oct 27 15:23:36.025: INFO: Waiting for pod pod-b1c47b2d-4cb2-4044-a199-711983c9a6cc to disappear +Oct 27 15:23:36.029: INFO: Pod pod-b1c47b2d-4cb2-4044-a199-711983c9a6cc no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:36.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8714" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":288,"skipped":4934,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:36.042: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3246 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:23:36.787: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:23:39.813: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:23:39.819: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7494-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:43.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3246" for this suite. +STEP: Destroying namespace "webhook-3246-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":289,"skipped":4970,"failed":0} +SSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:43.233: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1867 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:23:43.411: INFO: Waiting up to 5m0s for pod "downward-api-18dad396-e75f-4253-ad48-08a6aabba28a" in namespace "downward-api-1867" to be "Succeeded or Failed" +Oct 27 15:23:43.415: INFO: Pod "downward-api-18dad396-e75f-4253-ad48-08a6aabba28a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443523ms +Oct 27 15:23:45.421: INFO: Pod "downward-api-18dad396-e75f-4253-ad48-08a6aabba28a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010510247s +STEP: Saw pod success +Oct 27 15:23:45.421: INFO: Pod "downward-api-18dad396-e75f-4253-ad48-08a6aabba28a" satisfied condition "Succeeded or Failed" +Oct 27 15:23:45.426: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downward-api-18dad396-e75f-4253-ad48-08a6aabba28a container dapi-container: +STEP: delete the pod +Oct 27 15:23:45.448: INFO: Waiting for pod downward-api-18dad396-e75f-4253-ad48-08a6aabba28a to disappear +Oct 27 15:23:45.452: INFO: Pod downward-api-18dad396-e75f-4253-ad48-08a6aabba28a no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:45.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1867" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":290,"skipped":4976,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:45.465: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1128 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 15:23:45.624: INFO: Waiting up to 5m0s for pod "pod-db9c930b-cea6-4309-ac62-c0c15ab3dce1" in namespace "emptydir-1128" to be "Succeeded or Failed" +Oct 27 15:23:45.629: INFO: Pod "pod-db9c930b-cea6-4309-ac62-c0c15ab3dce1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.306062ms +Oct 27 15:23:47.635: INFO: Pod "pod-db9c930b-cea6-4309-ac62-c0c15ab3dce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010695054s +STEP: Saw pod success +Oct 27 15:23:47.635: INFO: Pod "pod-db9c930b-cea6-4309-ac62-c0c15ab3dce1" satisfied condition "Succeeded or Failed" +Oct 27 15:23:47.639: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-db9c930b-cea6-4309-ac62-c0c15ab3dce1 container test-container: +STEP: delete the pod +Oct 27 15:23:47.659: INFO: Waiting for pod pod-db9c930b-cea6-4309-ac62-c0c15ab3dce1 to disappear +Oct 27 15:23:47.663: INFO: Pod pod-db9c930b-cea6-4309-ac62-c0c15ab3dce1 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:47.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1128" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":291,"skipped":5002,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:47.676: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-1068 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:23:47.841: INFO: The status of Pod busybox-readonly-fsfd7e6f84-5696-4668-b771-b69366ed9f75 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:23:49.847: INFO: The status of Pod busybox-readonly-fsfd7e6f84-5696-4668-b771-b69366ed9f75 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:49.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1068" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":292,"skipped":5015,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:49.877: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6232 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:23:50.042: INFO: Waiting up to 5m0s for pod "downwardapi-volume-153c565d-d3a9-436a-bf1d-bb9f62c2e27b" in namespace "projected-6232" to be "Succeeded or Failed" +Oct 27 15:23:50.046: INFO: Pod "downwardapi-volume-153c565d-d3a9-436a-bf1d-bb9f62c2e27b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237481ms +Oct 27 15:23:52.053: INFO: Pod "downwardapi-volume-153c565d-d3a9-436a-bf1d-bb9f62c2e27b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010631194s +STEP: Saw pod success +Oct 27 15:23:52.053: INFO: Pod "downwardapi-volume-153c565d-d3a9-436a-bf1d-bb9f62c2e27b" satisfied condition "Succeeded or Failed" +Oct 27 15:23:52.057: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-153c565d-d3a9-436a-bf1d-bb9f62c2e27b container client-container: +STEP: delete the pod +Oct 27 15:23:52.084: INFO: Waiting for pod downwardapi-volume-153c565d-d3a9-436a-bf1d-bb9f62c2e27b to disappear +Oct 27 15:23:52.088: INFO: Pod downwardapi-volume-153c565d-d3a9-436a-bf1d-bb9f62c2e27b no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:52.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6232" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":293,"skipped":5086,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:52.101: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6364 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:23:52.251: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6364 version' +Oct 27 15:23:52.323: INFO: stderr: "" +Oct 27 15:23:52.323: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:38:50Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:32:41Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:52.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6364" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":294,"skipped":5087,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:52.334: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9598 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-projected-all-test-volume-f2c2aae2-d6d0-41d4-92f1-5e320d27da9b +STEP: Creating secret with name secret-projected-all-test-volume-f3f769d4-8036-4001-b668-ca7c9c4fbbf8 +STEP: Creating a pod to test Check all projections for projected volume plugin +Oct 27 15:23:52.504: INFO: Waiting up to 5m0s for pod "projected-volume-68ab941d-6067-46f6-8351-9fdfa1eb4e35" in namespace "projected-9598" to be "Succeeded or Failed" +Oct 27 15:23:52.509: INFO: Pod "projected-volume-68ab941d-6067-46f6-8351-9fdfa1eb4e35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15548ms +Oct 27 15:23:54.514: INFO: Pod "projected-volume-68ab941d-6067-46f6-8351-9fdfa1eb4e35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009762415s +STEP: Saw pod success +Oct 27 15:23:54.514: INFO: Pod "projected-volume-68ab941d-6067-46f6-8351-9fdfa1eb4e35" satisfied condition "Succeeded or Failed" +Oct 27 15:23:54.519: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod projected-volume-68ab941d-6067-46f6-8351-9fdfa1eb4e35 container projected-all-volume-test: +STEP: delete the pod +Oct 27 15:23:54.539: INFO: Waiting for pod projected-volume-68ab941d-6067-46f6-8351-9fdfa1eb4e35 to disappear +Oct 27 15:23:54.543: INFO: Pod projected-volume-68ab941d-6067-46f6-8351-9fdfa1eb4e35 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:54.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9598" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":295,"skipped":5125,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:54.556: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2943 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-f75ae81b-bb10-4cf5-911a-8c2f671b7979 +STEP: Creating a pod to test consume configMaps +Oct 27 15:23:54.723: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b5c6d87-500b-4f83-9b3a-b9953039c38b" in namespace "projected-2943" to be "Succeeded or Failed" +Oct 27 15:23:54.728: INFO: Pod "pod-projected-configmaps-6b5c6d87-500b-4f83-9b3a-b9953039c38b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364678ms +Oct 27 15:23:56.734: INFO: Pod "pod-projected-configmaps-6b5c6d87-500b-4f83-9b3a-b9953039c38b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010386628s +STEP: Saw pod success +Oct 27 15:23:56.734: INFO: Pod "pod-projected-configmaps-6b5c6d87-500b-4f83-9b3a-b9953039c38b" satisfied condition "Succeeded or Failed" +Oct 27 15:23:56.739: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-projected-configmaps-6b5c6d87-500b-4f83-9b3a-b9953039c38b container projected-configmap-volume-test: +STEP: delete the pod +Oct 27 15:23:56.777: INFO: Waiting for pod pod-projected-configmaps-6b5c6d87-500b-4f83-9b3a-b9953039c38b to disappear +Oct 27 15:23:56.781: INFO: Pod pod-projected-configmaps-6b5c6d87-500b-4f83-9b3a-b9953039c38b no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:56.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2943" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":296,"skipped":5189,"failed":0} +S +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:56.795: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5391 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-5391 +Oct 27 15:23:56.968: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:23:58.973: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 15:23:58.978: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 15:23:59.278: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 15:23:59.278: INFO: stdout: "iptables" +Oct 27 15:23:59.278: INFO: proxyMode: iptables +Oct 27 15:23:59.286: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 27 15:23:59.290: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-5391 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-5391 +I1027 15:23:59.310462 5703 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5391, replica count: 3 +I1027 15:24:02.362756 5703 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:24:02.379: INFO: Creating new exec pod +Oct 27 15:24:05.414: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec execpod-affinityt8qtj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Oct 27 15:24:05.711: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 15:24:05.711: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:24:05.711: 
INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec execpod-affinityt8qtj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.28.64.122 80' +Oct 27 15:24:06.051: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.28.64.122 80\nConnection to 172.28.64.122 80 port [tcp/http] succeeded!\n" +Oct 27 15:24:06.051: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:24:06.051: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec execpod-affinityt8qtj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.34 32247' +Oct 27 15:24:06.317: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.34 32247\nConnection to 10.250.8.34 32247 port [tcp/*] succeeded!\n" +Oct 27 15:24:06.317: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:24:06.317: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec execpod-affinityt8qtj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.8.35 32247' +Oct 27 15:24:06.582: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.8.35 32247\nConnection to 10.250.8.35 32247 port [tcp/*] succeeded!\n" +Oct 27 15:24:06.582: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:24:06.582: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec execpod-affinityt8qtj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.8.34:32247/ ; done' +Oct 27 15:24:06.997: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n" +Oct 27 15:24:06.997: INFO: stdout: 
"\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp\naffinity-nodeport-timeout-vqxgp" +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Received response from host: affinity-nodeport-timeout-vqxgp +Oct 27 15:24:06.997: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec execpod-affinityt8qtj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.8.34:32247/' +Oct 27 15:24:07.277: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n" +Oct 27 15:24:07.277: INFO: stdout: "affinity-nodeport-timeout-vqxgp" +Oct 27 15:24:27.279: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5391 exec execpod-affinityt8qtj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.8.34:32247/' +Oct 27 15:24:27.548: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.8.34:32247/\n" +Oct 27 15:24:27.548: INFO: stdout: "affinity-nodeport-timeout-58kf2" +Oct 27 15:24:27.548: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5391, will wait for the garbage collector to delete the pods +Oct 27 15:24:27.622: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.021944ms +Oct 27 15:24:27.722: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.409447ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:24:29.637: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5391" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":297,"skipped":5190,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:24:29.650: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3321 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-ebc38626-60d2-4e28-9aa4-1c437ee5d7dd +STEP: Creating a pod to test consume secrets +Oct 27 15:24:29.819: INFO: Waiting up to 5m0s for pod "pod-secrets-3a027a09-d9dd-4af4-aeae-7b0a76795035" in namespace "secrets-3321" to be "Succeeded or Failed" +Oct 27 15:24:29.823: INFO: Pod "pod-secrets-3a027a09-d9dd-4af4-aeae-7b0a76795035": Phase="Pending", Reason="", readiness=false. Elapsed: 4.639123ms +Oct 27 15:24:31.829: INFO: Pod "pod-secrets-3a027a09-d9dd-4af4-aeae-7b0a76795035": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010407319s +STEP: Saw pod success +Oct 27 15:24:31.829: INFO: Pod "pod-secrets-3a027a09-d9dd-4af4-aeae-7b0a76795035" satisfied condition "Succeeded or Failed" +Oct 27 15:24:31.834: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-3a027a09-d9dd-4af4-aeae-7b0a76795035 container secret-volume-test: +STEP: delete the pod +Oct 27 15:24:31.852: INFO: Waiting for pod pod-secrets-3a027a09-d9dd-4af4-aeae-7b0a76795035 to disappear +Oct 27 15:24:31.856: INFO: Pod pod-secrets-3a027a09-d9dd-4af4-aeae-7b0a76795035 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:24:31.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3321" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":298,"skipped":5193,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:24:31.871: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-2325 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 15:24:32.055: INFO: Number of nodes with available pods: 0 +Oct 27 15:24:32.055: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 15:24:33.069: INFO: Number of nodes with available pods: 0 +Oct 27 15:24:33.069: INFO: Node izgw81stpxs0bun38i01tfz is running more than one daemon pod +Oct 27 15:24:34.068: INFO: Number of nodes with available pods: 2 +Oct 27 15:24:34.068: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Oct 27 15:24:34.178: INFO: Number of nodes with available pods: 1 +Oct 27 15:24:34.178: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 15:24:35.192: INFO: Number of nodes with available pods: 1 +Oct 27 15:24:35.192: INFO: Node izgw89f23rpcwrl79tpgp1z is running more than one daemon pod +Oct 27 15:24:36.192: INFO: Number of nodes with available pods: 2 +Oct 27 15:24:36.192: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2325, will wait for the garbage collector to delete the pods +Oct 27 15:24:36.263: INFO: Deleting DaemonSet.extensions daemon-set took: 6.848372ms +Oct 27 15:24:36.363: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.430984ms +Oct 27 15:24:38.669: INFO: Number of nodes with available pods: 0 +Oct 27 15:24:38.669: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 15:24:38.673: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"39745"},"items":null} + +Oct 27 15:24:38.677: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"39745"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:24:38.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2325" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":299,"skipped":5259,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:24:38.711: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7559 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-7559 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 15:24:38.881: INFO: Found 0 stateful pods, waiting for 3 +Oct 27 15:24:48.890: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:24:48.890: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:24:48.890: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 15:24:48.930: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an 
update when the partition is greater than the number of replicas +STEP: Performing a canary update +Oct 27 15:24:58.974: INFO: Updating stateful set ss2 +Oct 27 15:24:58.989: INFO: Waiting for Pod statefulset-7559/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Restoring Pods to the correct revision when they are deleted +Oct 27 15:25:09.029: INFO: Found 1 stateful pods, waiting for 3 +Oct 27 15:25:19.037: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:25:19.037: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:25:19.037: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Oct 27 15:25:19.069: INFO: Updating stateful set ss2 +Oct 27 15:25:19.078: INFO: Waiting for Pod statefulset-7559/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 15:25:29.112: INFO: Updating stateful set ss2 +Oct 27 15:25:29.130: INFO: Waiting for StatefulSet statefulset-7559/ss2 to complete update +Oct 27 15:25:29.130: INFO: Waiting for Pod statefulset-7559/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:25:39.142: INFO: Deleting all statefulset in ns statefulset-7559 +Oct 27 15:25:39.147: INFO: Scaling statefulset ss2 to 0 +Oct 27 15:25:49.175: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:25:49.180: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:49.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7559" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":300,"skipped":5297,"failed":0} +SS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:49.207: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename discovery +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-6394 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:25:49.802: INFO: Checking APIGroup: apiregistration.k8s.io +Oct 27 15:25:49.805: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Oct 27 15:25:49.805: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Oct 27 15:25:49.805: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Oct 27 15:25:49.805: INFO: Checking APIGroup: apps +Oct 27 15:25:49.808: INFO: PreferredVersion.GroupVersion: apps/v1 +Oct 27 15:25:49.808: INFO: Versions found [{apps/v1 v1}] +Oct 27 15:25:49.808: INFO: apps/v1 matches apps/v1 +Oct 27 15:25:49.808: INFO: Checking APIGroup: events.k8s.io +Oct 27 15:25:49.811: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Oct 27 15:25:49.811: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Oct 27 15:25:49.811: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Oct 27 15:25:49.811: INFO: Checking APIGroup: authentication.k8s.io +Oct 27 15:25:49.814: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Oct 27 15:25:49.814: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Oct 27 15:25:49.814: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Oct 27 15:25:49.814: INFO: Checking APIGroup: authorization.k8s.io +Oct 27 15:25:49.818: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Oct 27 15:25:49.818: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Oct 27 15:25:49.818: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Oct 27 15:25:49.818: INFO: Checking APIGroup: autoscaling +Oct 27 15:25:49.821: INFO: PreferredVersion.GroupVersion: autoscaling/v1 +Oct 27 15:25:49.821: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Oct 27 15:25:49.821: INFO: autoscaling/v1 matches autoscaling/v1 +Oct 27 15:25:49.821: INFO: Checking APIGroup: batch +Oct 27 15:25:49.828: INFO: PreferredVersion.GroupVersion: batch/v1 +Oct 27 15:25:49.828: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Oct 27 15:25:49.828: INFO: batch/v1 matches batch/v1 +Oct 27 15:25:49.828: INFO: Checking APIGroup: certificates.k8s.io +Oct 27 15:25:49.833: INFO: 
PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Oct 27 15:25:49.833: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Oct 27 15:25:49.833: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Oct 27 15:25:49.833: INFO: Checking APIGroup: networking.k8s.io +Oct 27 15:25:49.836: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Oct 27 15:25:49.836: INFO: Versions found [{networking.k8s.io/v1 v1}] +Oct 27 15:25:49.836: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Oct 27 15:25:49.836: INFO: Checking APIGroup: policy +Oct 27 15:25:49.839: INFO: PreferredVersion.GroupVersion: policy/v1 +Oct 27 15:25:49.839: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Oct 27 15:25:49.839: INFO: policy/v1 matches policy/v1 +Oct 27 15:25:49.839: INFO: Checking APIGroup: rbac.authorization.k8s.io +Oct 27 15:25:49.843: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Oct 27 15:25:49.843: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Oct 27 15:25:49.843: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Oct 27 15:25:49.843: INFO: Checking APIGroup: storage.k8s.io +Oct 27 15:25:49.846: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Oct 27 15:25:49.846: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Oct 27 15:25:49.846: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Oct 27 15:25:49.846: INFO: Checking APIGroup: admissionregistration.k8s.io +Oct 27 15:25:49.849: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Oct 27 15:25:49.849: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Oct 27 15:25:49.849: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Oct 27 15:25:49.849: INFO: Checking APIGroup: apiextensions.k8s.io +Oct 27 15:25:49.852: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Oct 27 15:25:49.852: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Oct 27 15:25:49.852: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Oct 27 15:25:49.852: INFO: Checking APIGroup: scheduling.k8s.io +Oct 27 15:25:49.855: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Oct 27 15:25:49.855: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Oct 27 15:25:49.855: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Oct 27 15:25:49.855: INFO: Checking APIGroup: coordination.k8s.io +Oct 27 15:25:49.858: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Oct 27 15:25:49.858: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Oct 27 15:25:49.858: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Oct 27 15:25:49.858: INFO: Checking APIGroup: node.k8s.io +Oct 27 15:25:49.862: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Oct 27 15:25:49.862: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Oct 27 15:25:49.862: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Oct 27 15:25:49.862: INFO: Checking APIGroup: discovery.k8s.io +Oct 27 15:25:49.865: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Oct 27 15:25:49.865: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Oct 27 15:25:49.865: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Oct 27 15:25:49.865: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Oct 27 15:25:49.868: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 15:25:49.868: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] 
+Oct 27 15:25:49.868: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 15:25:49.868: INFO: Checking APIGroup: autoscaling.k8s.io +Oct 27 15:25:49.871: INFO: PreferredVersion.GroupVersion: autoscaling.k8s.io/v1 +Oct 27 15:25:49.871: INFO: Versions found [{autoscaling.k8s.io/v1 v1} {autoscaling.k8s.io/v1beta2 v1beta2}] +Oct 27 15:25:49.871: INFO: autoscaling.k8s.io/v1 matches autoscaling.k8s.io/v1 +Oct 27 15:25:49.871: INFO: Checking APIGroup: crd.projectcalico.org +Oct 27 15:25:49.878: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Oct 27 15:25:49.878: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Oct 27 15:25:49.878: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Oct 27 15:25:49.878: INFO: Checking APIGroup: cert.gardener.cloud +Oct 27 15:25:49.881: INFO: PreferredVersion.GroupVersion: cert.gardener.cloud/v1alpha1 +Oct 27 15:25:49.882: INFO: Versions found [{cert.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 15:25:49.882: INFO: cert.gardener.cloud/v1alpha1 matches cert.gardener.cloud/v1alpha1 +Oct 27 15:25:49.882: INFO: Checking APIGroup: dns.gardener.cloud +Oct 27 15:25:49.885: INFO: PreferredVersion.GroupVersion: dns.gardener.cloud/v1alpha1 +Oct 27 15:25:49.885: INFO: Versions found [{dns.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 15:25:49.885: INFO: dns.gardener.cloud/v1alpha1 matches dns.gardener.cloud/v1alpha1 +Oct 27 15:25:49.885: INFO: Checking APIGroup: snapshot.storage.k8s.io +Oct 27 15:25:49.888: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 +Oct 27 15:25:49.888: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] +Oct 27 15:25:49.888: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 +Oct 27 15:25:49.888: INFO: Checking APIGroup: metrics.k8s.io +Oct 27 15:25:49.891: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Oct 27 15:25:49.891: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Oct 27 15:25:49.891: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:49.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-6394" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":301,"skipped":5299,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:49.904: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-3589 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nspatchtest-56c1c240-54df-496c-9bd5-ffd918bf3e3c-9013 +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:50.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-3589" for this suite. +STEP: Destroying namespace "nspatchtest-56c1c240-54df-496c-9bd5-ffd918bf3e3c-9013" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":302,"skipped":5307,"failed":0} +SSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:50.227: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1796 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:50.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1796" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":303,"skipped":5311,"failed":0} +SSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:50.418: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4050 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-projected-stc9 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:25:50.614: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-stc9" in namespace "subpath-4050" to be "Succeeded or Failed" +Oct 27 15:25:50.619: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.457038ms +Oct 27 15:25:52.625: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 2.011058206s +Oct 27 15:25:54.632: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 4.017601977s +Oct 27 15:25:56.639: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 6.024637684s +Oct 27 15:25:58.656: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 8.041678918s +Oct 27 15:26:00.662: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 10.047193369s +Oct 27 15:26:02.668: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 12.053868017s +Oct 27 15:26:04.679: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 14.064694854s +Oct 27 15:26:06.685: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 16.070579829s +Oct 27 15:26:08.692: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 18.077305733s +Oct 27 15:26:10.697: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Running", Reason="", readiness=true. Elapsed: 20.082550494s +Oct 27 15:26:12.703: INFO: Pod "pod-subpath-test-projected-stc9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.088997031s +STEP: Saw pod success +Oct 27 15:26:12.703: INFO: Pod "pod-subpath-test-projected-stc9" satisfied condition "Succeeded or Failed" +Oct 27 15:26:12.708: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-subpath-test-projected-stc9 container test-container-subpath-projected-stc9: +STEP: delete the pod +Oct 27 15:26:12.730: INFO: Waiting for pod pod-subpath-test-projected-stc9 to disappear +Oct 27 15:26:12.734: INFO: Pod pod-subpath-test-projected-stc9 no longer exists +STEP: Deleting pod pod-subpath-test-projected-stc9 +Oct 27 15:26:12.734: INFO: Deleting pod "pod-subpath-test-projected-stc9" in namespace "subpath-4050" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:12.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4050" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":304,"skipped":5314,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:12.752: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7783 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-7783 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-7783 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7783 +Oct 27 15:26:12.920: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:26:22.926: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Oct 27 15:26:22.931: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:26:23.311: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:26:23.311: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:26:23.311: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:26:23.316: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 15:26:33.324: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:26:33.324: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:26:33.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999557s +Oct 27 15:26:34.352: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995094929s +Oct 27 15:26:35.358: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988349286s +Oct 27 15:26:36.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98224663s +Oct 27 15:26:37.370: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976563718s +Oct 27 15:26:38.376: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.970788663s +Oct 27 15:26:39.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.964640823s +Oct 27 15:26:40.388: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.957946984s +Oct 27 15:26:41.394: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.952573693s +Oct 27 15:26:42.400: INFO: Verifying statefulset ss doesn't scale past 1 for another 947.141404ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7783 +Oct 27 15:26:43.406: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:26:43.738: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:26:43.738: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:26:43.738: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:26:43.743: INFO: Found 1 stateful pods, waiting for 3 +Oct 27 15:26:53.756: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:26:53.756: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:26:53.756: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Oct 27 15:26:53.767: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:26:54.064: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:26:54.064: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:26:54.064: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:26:54.064: INFO: 
Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:26:54.375: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:26:54.375: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:26:54.375: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:26:54.375: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:26:54.669: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:26:54.669: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:26:54.669: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:26:54.669: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:26:54.675: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Oct 27 15:27:04.686: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:27:04.686: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:27:04.686: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:27:04.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999641s +Oct 27 15:27:05.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994210862s +Oct 27 15:27:06.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98809833s +Oct 27 15:27:07.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981225304s +Oct 27 15:27:08.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974379927s +Oct 27 15:27:09.737: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.967791913s +Oct 27 15:27:10.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.959480418s +Oct 27 15:27:11.750: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.953545252s +Oct 27 15:27:12.756: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.947454392s +Oct 27 15:27:13.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 940.657297ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7783 +Oct 27 15:27:14.770: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:27:15.092: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:27:15.092: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:27:15.092: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:27:15.092: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:27:15.429: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:27:15.429: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:27:15.429: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:27:15.430: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-7783 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:27:15.744: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:27:15.744: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:27:15.744: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:27:15.744: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:27:25.766: INFO: Deleting all statefulset in ns statefulset-7783 +Oct 27 15:27:25.771: INFO: Scaling statefulset ss to 0 +Oct 27 15:27:25.785: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:27:25.789: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:25.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7783" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":305,"skipped":5324,"failed":0} +S +------------------------------ +[sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:25.820: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-72 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Oct 27 15:27:27.998: INFO: &Pod{ObjectMeta:{send-events-ba300482-277b-4161-92c8-992c6c169c70 events-72 1c758d38-2c19-4047-bab1-08e673479069 40965 0 2021-10-27 15:27:25 +0000 UTC map[name:foo time:967454798] map[cni.projectcalico.org/containerID:3776bb8e00d7b7ffd52167858fea6c01066b2ece92b3b2788353f86f0e3ec1a7 cni.projectcalico.org/podIP:172.16.1.100/32 cni.projectcalico.org/podIPs:172.16.1.100/32 kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-27 15:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:27:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:27:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jt6mm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jt6mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:27:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:27:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:27:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.100,StartTime:2021-10-27 15:27:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:27:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://8a73d79296650d04607023df105769b1572bfbe9be5ee3eb2e90c7cc77a92285,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +STEP: checking for scheduler event about the pod +Oct 27 15:27:30.005: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Oct 27 15:27:32.012: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:32.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-72" for this suite. 
+•{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":346,"completed":306,"skipped":5325,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:32.033: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename hostport +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostport-7933 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Oct 27 15:27:32.203: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:34.209: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.250.8.34 on the node which pod1 resides and expect scheduled +Oct 27 15:27:34.224: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:36.230: INFO: The status of Pod pod2 is Running (Ready = false) +Oct 27 15:27:38.231: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.250.8.34 but use UDP protocol on the node which pod2 resides +Oct 27 15:27:38.248: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:40.254: INFO: The status of Pod pod3 is Running (Ready = true) +Oct 27 15:27:40.268: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:42.275: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Oct 27 15:27:42.280: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.250.8.34 http://127.0.0.1:54323/hostname] Namespace:hostport-7933 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:42.280: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.8.34, port: 54323 +Oct 27 15:27:42.532: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.250.8.34:54323/hostname] Namespace:hostport-7933 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:42.532: INFO: >>> 
kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.8.34, port: 54323 UDP +Oct 27 15:27:42.788: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.250.8.34 54323] Namespace:hostport-7933 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:42.788: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:48.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-7933" for this suite. +•{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":307,"skipped":5389,"failed":0} +S +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:48.024: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4441 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 15:27:48.205: INFO: The status of Pod pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:50.211: INFO: The status of Pod pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 15:27:50.736: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79" +Oct 27 15:27:50.736: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79" in namespace "pods-4441" to be "terminated due to deadline exceeded" +Oct 27 15:27:50.740: INFO: Pod "pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79": Phase="Running", Reason="", readiness=true. Elapsed: 4.509635ms +Oct 27 15:27:52.747: INFO: Pod "pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79": Phase="Running", Reason="", readiness=true. Elapsed: 2.011477273s +Oct 27 15:27:54.753: INFO: Pod "pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79": Phase="Failed", Reason="DeadlineExceeded", readiness=true. 
Elapsed: 4.017012663s +Oct 27 15:27:54.753: INFO: Pod "pod-update-activedeadlineseconds-ceb91fe5-a9ce-453f-9d8a-40ed0782ca79" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:54.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4441" for this suite. +•{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":308,"skipped":5390,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:54.766: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7952 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:27:54.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4de03c9b-94b2-4274-9de2-bcbfc2e4a173" in namespace "projected-7952" to be "Succeeded or Failed" +Oct 27 15:27:54.931: INFO: Pod "downwardapi-volume-4de03c9b-94b2-4274-9de2-bcbfc2e4a173": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277013ms +Oct 27 15:27:56.937: INFO: Pod "downwardapi-volume-4de03c9b-94b2-4274-9de2-bcbfc2e4a173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01037131s +STEP: Saw pod success +Oct 27 15:27:56.938: INFO: Pod "downwardapi-volume-4de03c9b-94b2-4274-9de2-bcbfc2e4a173" satisfied condition "Succeeded or Failed" +Oct 27 15:27:56.942: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-4de03c9b-94b2-4274-9de2-bcbfc2e4a173 container client-container: +STEP: delete the pod +Oct 27 15:27:57.023: INFO: Waiting for pod downwardapi-volume-4de03c9b-94b2-4274-9de2-bcbfc2e4a173 to disappear +Oct 27 15:27:57.027: INFO: Pod downwardapi-volume-4de03c9b-94b2-4274-9de2-bcbfc2e4a173 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:57.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7952" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":309,"skipped":5412,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:57.041: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-7135 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 15:27:57.204: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:28:57.255: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 27 15:28:57.285: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 15:28:57.296: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 15:28:57.315: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 15:28:57.325: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:03.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-7135" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":310,"skipped":5428,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:03.451: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-1809 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-1809 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:29:03.613: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:29:13.620: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Oct 27 15:29:13.649: INFO: Found 1 stateful pods, waiting for 2 +Oct 27 15:29:23.659: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:29:23.659: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:29:23.686: INFO: Deleting all statefulset in ns statefulset-1809 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:23.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-1809" for this suite. 
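+
+A rough kubectl analogue of the list/patch/delete-collection sequence above, assuming the captured namespace and StatefulSet name (the harness's label selectors are not shown in the log):
+
+```bash
+kubectl get statefulsets --all-namespaces                 # list the collection
+kubectl -n statefulset-1809 patch statefulset test-ss \
+  --type merge -p '{"metadata":{"annotations":{"e2e":"patched"}}}'  # illustrative annotation
+kubectl -n statefulset-1809 delete statefulsets --all     # delete the collection
+```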
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":311,"skipped":5435,"failed":0} +S +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:23.712: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-5941 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:23.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-5941" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":312,"skipped":5436,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:23.899: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-601 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 
+STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:49.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-601" for this suite. +•{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":313,"skipped":5454,"failed":0} +SSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:49.394: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6361 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:29:50.479: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:29:53.506: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:04.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6361" for this suite. +STEP: Destroying namespace "webhook-6361-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":314,"skipped":5457,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:04.046: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1772 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-dafed461-0c1a-41e5-aaed-1851d3d8bdfe +STEP: Creating a pod to test consume secrets +Oct 27 15:30:04.210: INFO: Waiting up to 5m0s for pod "pod-secrets-854623ad-1e5f-4566-9e77-26fb44301eef" in namespace "secrets-1772" to be "Succeeded or Failed" +Oct 27 15:30:04.214: INFO: Pod "pod-secrets-854623ad-1e5f-4566-9e77-26fb44301eef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301397ms +Oct 27 15:30:06.221: INFO: Pod "pod-secrets-854623ad-1e5f-4566-9e77-26fb44301eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010423763s +STEP: Saw pod success +Oct 27 15:30:06.221: INFO: Pod "pod-secrets-854623ad-1e5f-4566-9e77-26fb44301eef" satisfied condition "Succeeded or Failed" +Oct 27 15:30:06.225: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-secrets-854623ad-1e5f-4566-9e77-26fb44301eef container secret-volume-test: +STEP: delete the pod +Oct 27 15:30:06.244: INFO: Waiting for pod pod-secrets-854623ad-1e5f-4566-9e77-26fb44301eef to disappear +Oct 27 15:30:06.249: INFO: Pod pod-secrets-854623ad-1e5f-4566-9e77-26fb44301eef no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:06.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1772" for this suite. 
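+
+A minimal sketch of the defaultMode behaviour exercised above (secret, pod, and key names are illustrative): files projected from the secret inherit the volume's defaultMode.
+
+```bash
+kubectl create secret generic demo-secret --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-secrets-demo               # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: demo-secret
+      defaultMode: 0400                # files land as -r-------- inside the mount
+EOF
+```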
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":315,"skipped":5475,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:06.262: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2317 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 15:30:06.413: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 create -f -' +Oct 27 15:30:06.641: INFO: stderr: "" +Oct 27 15:30:06.642: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:30:06.642: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:30:06.724: INFO: stderr: "" +Oct 27 15:30:06.724: INFO: stdout: "update-demo-nautilus-nrz5f update-demo-nautilus-vhppj " +Oct 27 15:30:06.724: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods update-demo-nautilus-nrz5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:06.800: INFO: stderr: "" +Oct 27 15:30:06.800: INFO: stdout: "" +Oct 27 15:30:06.800: INFO: update-demo-nautilus-nrz5f is created but not running +Oct 27 15:30:11.801: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:30:11.869: INFO: stderr: "" +Oct 27 15:30:11.869: INFO: stdout: "update-demo-nautilus-nrz5f update-demo-nautilus-vhppj " +Oct 27 15:30:11.869: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods update-demo-nautilus-nrz5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:11.938: INFO: stderr: "" +Oct 27 15:30:11.938: INFO: stdout: "true" +Oct 27 15:30:11.938: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods update-demo-nautilus-nrz5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:30:12.004: INFO: stderr: "" +Oct 27 15:30:12.004: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:30:12.004: INFO: validating pod update-demo-nautilus-nrz5f +Oct 27 15:30:12.066: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:30:12.066: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:30:12.066: INFO: update-demo-nautilus-nrz5f is verified up and running +Oct 27 15:30:12.066: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods update-demo-nautilus-vhppj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:12.135: INFO: stderr: "" +Oct 27 15:30:12.135: INFO: stdout: "true" +Oct 27 15:30:12.135: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods update-demo-nautilus-vhppj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:30:12.200: INFO: stderr: "" +Oct 27 15:30:12.200: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:30:12.200: INFO: validating pod update-demo-nautilus-vhppj +Oct 27 15:30:12.217: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:30:12.217: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:30:12.217: INFO: update-demo-nautilus-vhppj is verified up and running +STEP: using delete to clean up resources +Oct 27 15:30:12.217: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 delete --grace-period=0 --force -f -' +Oct 27 15:30:12.288: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:30:12.288: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 15:30:12.288: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get rc,svc -l name=update-demo --no-headers' +Oct 27 15:30:12.362: INFO: stderr: "No resources found in kubectl-2317 namespace.\n" +Oct 27 15:30:12.362: INFO: stdout: "" +Oct 27 15:30:12.363: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2317 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 15:30:12.430: INFO: stderr: "" +Oct 27 15:30:12.430: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:12.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2317" for this suite. 
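+
+The readiness polling above is plain kubectl with go-template output (exists is a kubectl template function); trimmed to its essence, assuming KUBECONFIG already points at the shoot:
+
+```bash
+# List the replication controller's pods by the label from the log.
+kubectl -n kubectl-2317 get pods -l name=update-demo \
+  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
+
+# Print "true" once the update-demo container reports a running state.
+kubectl -n kubectl-2317 get pods update-demo-nautilus-nrz5f -o template \
+  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
+```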
+•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":316,"skipped":5487,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:12.443: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8789 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 15:30:12.594: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 create -f -' +Oct 27 15:30:12.762: INFO: stderr: "" +Oct 27 15:30:12.762: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:30:12.762: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:30:12.839: INFO: stderr: "" +Oct 27 15:30:12.839: INFO: stdout: "update-demo-nautilus-69vx5 update-demo-nautilus-pjmsr " +Oct 27 15:30:12.840: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-69vx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:12.905: INFO: stderr: "" +Oct 27 15:30:12.905: INFO: stdout: "" +Oct 27 15:30:12.905: INFO: update-demo-nautilus-69vx5 is created but not running +Oct 27 15:30:17.907: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:30:17.983: INFO: stderr: "" +Oct 27 15:30:17.983: INFO: stdout: "update-demo-nautilus-69vx5 update-demo-nautilus-pjmsr " +Oct 27 15:30:17.983: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-69vx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:18.049: INFO: stderr: "" +Oct 27 15:30:18.049: INFO: stdout: "true" +Oct 27 15:30:18.049: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-69vx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:30:18.114: INFO: stderr: "" +Oct 27 15:30:18.115: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:30:18.115: INFO: validating pod update-demo-nautilus-69vx5 +Oct 27 15:30:18.174: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:30:18.174: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:30:18.174: INFO: update-demo-nautilus-69vx5 is verified up and running +Oct 27 15:30:18.174: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-pjmsr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:18.238: INFO: stderr: "" +Oct 27 15:30:18.238: INFO: stdout: "true" +Oct 27 15:30:18.238: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-pjmsr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:30:18.306: INFO: stderr: "" +Oct 27 15:30:18.306: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:30:18.306: INFO: validating pod update-demo-nautilus-pjmsr +Oct 27 15:30:18.364: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:30:18.364: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:30:18.364: INFO: update-demo-nautilus-pjmsr is verified up and running +STEP: scaling down the replication controller +Oct 27 15:30:18.366: INFO: scanned /root for discovery docs: +Oct 27 15:30:18.366: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Oct 27 15:30:19.459: INFO: stderr: "" +Oct 27 15:30:19.459: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:30:19.459: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:30:19.530: INFO: stderr: "" +Oct 27 15:30:19.530: INFO: stdout: "update-demo-nautilus-69vx5 update-demo-nautilus-pjmsr " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Oct 27 15:30:24.531: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:30:24.602: INFO: stderr: "" +Oct 27 15:30:24.602: INFO: stdout: "update-demo-nautilus-69vx5 " +Oct 27 15:30:24.602: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-69vx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:24.670: INFO: stderr: "" +Oct 27 15:30:24.670: INFO: stdout: "true" +Oct 27 15:30:24.670: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-69vx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:30:24.745: INFO: stderr: "" +Oct 27 15:30:24.745: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:30:24.745: INFO: validating pod update-demo-nautilus-69vx5 +Oct 27 15:30:24.755: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:30:24.755: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:30:24.755: INFO: update-demo-nautilus-69vx5 is verified up and running +STEP: scaling up the replication controller +Oct 27 15:30:24.757: INFO: scanned /root for discovery docs: +Oct 27 15:30:24.757: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Oct 27 15:30:25.853: INFO: stderr: "" +Oct 27 15:30:25.853: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:30:25.853: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:30:25.924: INFO: stderr: "" +Oct 27 15:30:25.924: INFO: stdout: "update-demo-nautilus-69vx5 update-demo-nautilus-lq7wh " +Oct 27 15:30:25.924: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-69vx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:25.989: INFO: stderr: "" +Oct 27 15:30:25.989: INFO: stdout: "true" +Oct 27 15:30:25.989: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-69vx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:30:26.053: INFO: stderr: "" +Oct 27 15:30:26.053: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:30:26.053: INFO: validating pod update-demo-nautilus-69vx5 +Oct 27 15:30:26.106: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:30:26.106: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:30:26.106: INFO: update-demo-nautilus-69vx5 is verified up and running +Oct 27 15:30:26.106: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-lq7wh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:30:26.174: INFO: stderr: "" +Oct 27 15:30:26.174: INFO: stdout: "true" +Oct 27 15:30:26.174: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods update-demo-nautilus-lq7wh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:30:26.249: INFO: stderr: "" +Oct 27 15:30:26.249: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:30:26.249: INFO: validating pod update-demo-nautilus-lq7wh +Oct 27 15:30:26.309: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:30:26.309: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:30:26.309: INFO: update-demo-nautilus-lq7wh is verified up and running +STEP: using delete to clean up resources +Oct 27 15:30:26.309: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 delete --grace-period=0 --force -f -' +Oct 27 15:30:26.380: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:30:26.380: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 15:30:26.380: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get rc,svc -l name=update-demo --no-headers' +Oct 27 15:30:26.454: INFO: stderr: "No resources found in kubectl-8789 namespace.\n" +Oct 27 15:30:26.454: INFO: stdout: "" +Oct 27 15:30:26.454: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8789 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 15:30:26.525: INFO: stderr: "" +Oct 27 15:30:26.525: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:26.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8789" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":317,"skipped":5488,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:26.539: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5381 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:30:27.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 15:30:29.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945427, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945427, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945427, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945427, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:30:32.161: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:32.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5381" for this suite. +STEP: Destroying namespace "webhook-5381-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":318,"skipped":5516,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:32.417: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6969 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:30:32.564: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 15:30:36.111: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6969 --namespace=crd-publish-openapi-6969 create -f -' +Oct 27 15:30:36.873: INFO: stderr: "" +Oct 27 15:30:36.873: INFO: stdout: "e2e-test-crd-publish-openapi-5936-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 15:30:36.873: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6969 --namespace=crd-publish-openapi-6969 delete e2e-test-crd-publish-openapi-5936-crds test-cr' +Oct 27 15:30:36.948: INFO: stderr: "" +Oct 27 15:30:36.948: INFO: stdout: "e2e-test-crd-publish-openapi-5936-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Oct 27 15:30:36.948: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6969 --namespace=crd-publish-openapi-6969 apply -f -' +Oct 27 15:30:37.127: INFO: stderr: "" +Oct 27 15:30:37.127: INFO: stdout: "e2e-test-crd-publish-openapi-5936-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 15:30:37.127: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6969 --namespace=crd-publish-openapi-6969 delete e2e-test-crd-publish-openapi-5936-crds test-cr' +Oct 27 15:30:37.204: INFO: stderr: "" +Oct 27 15:30:37.204: INFO: stdout: 
"e2e-test-crd-publish-openapi-5936-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 15:30:37.204: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6969 explain e2e-test-crd-publish-openapi-5936-crds' +Oct 27 15:30:37.364: INFO: stderr: "" +Oct 27 15:30:37.364: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5936-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:40.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6969" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":319,"skipped":5532,"failed":0} +SS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:40.895: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7266 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-c68501a9-6773-4c91-9a40-0fd70eeb0446 in namespace container-probe-7266 +Oct 27 15:30:43.073: INFO: Started pod busybox-c68501a9-6773-4c91-9a40-0fd70eeb0446 in namespace container-probe-7266 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:30:43.077: INFO: Initial restart count of pod busybox-c68501a9-6773-4c91-9a40-0fd70eeb0446 is 0 +Oct 27 15:31:33.252: INFO: Restart count of pod container-probe-7266/busybox-c68501a9-6773-4c91-9a40-0fd70eeb0446 is now 1 (50.174368505s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:33.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7266" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":320,"skipped":5534,"failed":0} +SSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:33.272: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6545 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in volume subpath +Oct 27 15:31:33.434: INFO: Waiting up to 5m0s for pod "var-expansion-85cb0f52-c124-4305-ae97-60967601a8fb" in namespace "var-expansion-6545" to be "Succeeded or Failed" +Oct 27 15:31:33.439: INFO: Pod "var-expansion-85cb0f52-c124-4305-ae97-60967601a8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718013ms +Oct 27 15:31:35.446: INFO: Pod "var-expansion-85cb0f52-c124-4305-ae97-60967601a8fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011364548s +STEP: Saw pod success +Oct 27 15:31:35.446: INFO: Pod "var-expansion-85cb0f52-c124-4305-ae97-60967601a8fb" satisfied condition "Succeeded or Failed" +Oct 27 15:31:35.451: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod var-expansion-85cb0f52-c124-4305-ae97-60967601a8fb container dapi-container: +STEP: delete the pod +Oct 27 15:31:35.475: INFO: Waiting for pod var-expansion-85cb0f52-c124-4305-ae97-60967601a8fb to disappear +Oct 27 15:31:35.479: INFO: Pod var-expansion-85cb0f52-c124-4305-ae97-60967601a8fb no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:35.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6545" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":321,"skipped":5540,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:35.493: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-8014 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override arguments +Oct 27 15:31:35.784: INFO: Waiting up to 5m0s for pod "client-containers-51899ba7-beaf-4285-9db6-99115b9d838d" in namespace "containers-8014" to be "Succeeded or Failed" +Oct 27 15:31:35.789: INFO: Pod "client-containers-51899ba7-beaf-4285-9db6-99115b9d838d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.790608ms +Oct 27 15:31:37.796: INFO: Pod "client-containers-51899ba7-beaf-4285-9db6-99115b9d838d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011523246s +STEP: Saw pod success +Oct 27 15:31:37.796: INFO: Pod "client-containers-51899ba7-beaf-4285-9db6-99115b9d838d" satisfied condition "Succeeded or Failed" +Oct 27 15:31:37.800: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod client-containers-51899ba7-beaf-4285-9db6-99115b9d838d container agnhost-container: +STEP: delete the pod +Oct 27 15:31:37.820: INFO: Waiting for pod client-containers-51899ba7-beaf-4285-9db6-99115b9d838d to disappear +Oct 27 15:31:37.824: INFO: Pod client-containers-51899ba7-beaf-4285-9db6-99115b9d838d no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:37.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-8014" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":322,"skipped":5560,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:37.838: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-7372 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's args +Oct 27 15:31:38.000: INFO: Waiting up to 5m0s for pod "var-expansion-7aab53a7-763a-48ab-acdd-94f1c399d8c9" in namespace "var-expansion-7372" to be "Succeeded or Failed" +Oct 27 15:31:38.005: INFO: Pod "var-expansion-7aab53a7-763a-48ab-acdd-94f1c399d8c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344673ms +Oct 27 15:31:40.010: INFO: Pod "var-expansion-7aab53a7-763a-48ab-acdd-94f1c399d8c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010107415s +STEP: Saw pod success +Oct 27 15:31:40.011: INFO: Pod "var-expansion-7aab53a7-763a-48ab-acdd-94f1c399d8c9" satisfied condition "Succeeded or Failed" +Oct 27 15:31:40.015: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod var-expansion-7aab53a7-763a-48ab-acdd-94f1c399d8c9 container dapi-container: +STEP: delete the pod +Oct 27 15:31:40.034: INFO: Waiting for pod var-expansion-7aab53a7-763a-48ab-acdd-94f1c399d8c9 to disappear +Oct 27 15:31:40.038: INFO: Pod var-expansion-7aab53a7-763a-48ab-acdd-94f1c399d8c9 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:40.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-7372" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":323,"skipped":5590,"failed":0} +S +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:40.052: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3006 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +Oct 27 15:31:40.730: INFO: created pod pod-service-account-defaultsa +Oct 27 15:31:40.731: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Oct 27 15:31:40.739: INFO: created pod pod-service-account-mountsa +Oct 27 15:31:40.740: INFO: pod pod-service-account-mountsa service account token volume mount: true +Oct 27 15:31:40.760: INFO: created pod pod-service-account-nomountsa +Oct 27 15:31:40.760: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Oct 27 15:31:40.769: INFO: created pod pod-service-account-defaultsa-mountspec +Oct 27 15:31:40.769: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Oct 27 15:31:40.778: INFO: created pod pod-service-account-mountsa-mountspec +Oct 27 15:31:40.778: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Oct 27 15:31:40.787: INFO: created pod pod-service-account-nomountsa-mountspec +Oct 27 15:31:40.787: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Oct 27 15:31:40.795: INFO: created pod pod-service-account-defaultsa-nomountspec +Oct 27 15:31:40.795: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Oct 27 15:31:40.805: INFO: created pod pod-service-account-mountsa-nomountspec +Oct 27 15:31:40.805: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Oct 27 15:31:40.863: INFO: created pod pod-service-account-nomountsa-nomountspec +Oct 27 15:31:40.863: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:40.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3006" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":324,"skipped":5591,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:40.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-4722 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4722;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4722;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4722.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.117.25.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.25.117.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.117.25.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.25.117.108_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4722;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4722;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4722.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.117.25.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.25.117.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.117.25.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.25.117.108_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:31:43.247: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.256: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.266: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.369: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.377: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.486: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.494: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.501: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.516: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.523: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:43.582: INFO: Lookups using dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 27 15:31:48.591: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.598: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.606: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.659: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.666: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.730: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.738: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.745: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.752: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.759: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.766: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:48.822: INFO: Lookups using dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 27 15:31:53.594: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.639: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.647: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.655: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.699: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.707: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.778: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.785: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.793: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.800: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.807: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.815: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:53.874: INFO: Lookups using dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 27 15:31:58.591: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.598: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.606: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.666: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.734: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.742: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.749: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.765: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.772: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:31:58.832: INFO: Lookups using dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 27 15:32:03.594: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.602: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.610: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.655: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.663: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.671: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.741: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.749: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.756: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.770: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.779: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:03.848: INFO: Lookups using dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 27 15:32:08.590: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.598: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.666: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.745: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.752: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.759: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.773: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.780: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38: the server could not find the requested resource (get pods dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38) +Oct 27 15:32:08.846: INFO: Lookups using dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 27 15:32:13.842: INFO: DNS probes using dns-4722/dns-test-53be49f3-7a80-4f96-b55a-fdae55036d38 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:13.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4722" for this suite. +•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":325,"skipped":5619,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:13.885: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-9549 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 15:32:14.031: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:17.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-9549" for this suite. 
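+
+A minimal sketch of the ordering this RestartAlways case checks: init containers run one at a time, each to completion, before any regular container starts. Names and the image below are illustrative assumptions.
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: init-demo                 # hypothetical name
+spec:
+  restartPolicy: Always
+  initContainers:                 # executed in order, before 'main'
+  - name: init-1
+    image: busybox:1.36
+    command: ["sh", "-c", "echo first init step"]
+  - name: init-2
+    image: busybox:1.36
+    command: ["sh", "-c", "echo second init step"]
+  containers:
+  - name: main
+    image: busybox:1.36
+    command: ["sleep", "3600"]
+EOF
+kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'
+```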
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":326,"skipped":5729,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:17.123: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5274 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:32:17.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:32:20.680: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:20.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5274" for this suite. +STEP: Destroying namespace "webhook-5274-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":327,"skipped":5752,"failed":0} +SSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:20.983: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename certificates +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-6265 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 15:32:21.842: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 15:32:21.856: INFO: waiting for watch events with expected annotations +Oct 27 15:32:21.856: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:21.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-6265" for this suite. 
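+
+The create/get/list/watch/patch/approve/status/delete sequence above can be reproduced against the same certificates.k8s.io/v1 surface roughly as follows; the subject, file names, and CSR name are hypothetical, and `base64 -w0` assumes GNU coreutils.
+
+```bash
+openssl req -new -newkey rsa:2048 -nodes -keyout demo.key \
+  -subj "/CN=demo-user/O=demo-group" -out demo.csr
+kubectl apply -f - <<EOF
+apiVersion: certificates.k8s.io/v1
+kind: CertificateSigningRequest
+metadata:
+  name: demo-csr                                  # hypothetical name
+spec:
+  request: $(base64 -w0 < demo.csr)
+  signerName: kubernetes.io/kube-apiserver-client
+  expirationSeconds: 86400
+  usages: ["client auth"]
+EOF
+kubectl get csr demo-csr                          # get / list
+kubectl certificate approve demo-csr              # updates /approval, as the test does
+kubectl get csr demo-csr -o jsonpath='{.status.certificate}' | base64 -d > demo.crt
+kubectl delete csr demo-csr
+```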
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":328,"skipped":5759,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:21.929: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7887 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-38c82368-da0c-4027-ab10-c69102777f78 +STEP: Creating a pod to test consume configMaps +Oct 27 15:32:22.096: INFO: Waiting up to 5m0s for pod "pod-configmaps-d72a79e8-76da-4f21-a104-8cdbdd5cf5af" in namespace "configmap-7887" to be "Succeeded or Failed" +Oct 27 15:32:22.100: INFO: Pod "pod-configmaps-d72a79e8-76da-4f21-a104-8cdbdd5cf5af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158851ms +Oct 27 15:32:24.106: INFO: Pod "pod-configmaps-d72a79e8-76da-4f21-a104-8cdbdd5cf5af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010531697s +STEP: Saw pod success +Oct 27 15:32:24.106: INFO: Pod "pod-configmaps-d72a79e8-76da-4f21-a104-8cdbdd5cf5af" satisfied condition "Succeeded or Failed" +Oct 27 15:32:24.111: INFO: Trying to get logs from node izgw81stpxs0bun38i01tfz pod pod-configmaps-d72a79e8-76da-4f21-a104-8cdbdd5cf5af container agnhost-container: +STEP: delete the pod +Oct 27 15:32:24.138: INFO: Waiting for pod pod-configmaps-d72a79e8-76da-4f21-a104-8cdbdd5cf5af to disappear +Oct 27 15:32:24.143: INFO: Pod pod-configmaps-d72a79e8-76da-4f21-a104-8cdbdd5cf5af no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:24.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7887" for this suite. 
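+
+A minimal sketch of consuming a ConfigMap as a volume while running as a non-root user, which is what this case verifies; object names, the UID, and the image are illustrative assumptions.
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: demo-config               # hypothetical name
+data:
+  data-1: value-1
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-vol-demo        # hypothetical name
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000               # non-root
+  containers:
+  - name: agnhost-container
+    image: busybox:1.36
+    command: ["cat", "/etc/config/data-1"]
+    volumeMounts:
+    - name: config
+      mountPath: /etc/config
+  volumes:
+  - name: config
+    configMap:
+      name: demo-config
+EOF
+kubectl logs configmap-vol-demo   # expected output: value-1
+```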
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":329,"skipped":5795,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:24.157: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-9236 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:24.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-9236" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":330,"skipped":5898,"failed":0} +SSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:24.327: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-640 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:32:24.495: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 15:32:29.500: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Oct 27 15:32:29.509: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Oct 27 15:32:29.519: INFO: observed ReplicaSet test-rs in namespace replicaset-640 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:32:29.523: INFO: observed ReplicaSet test-rs in namespace replicaset-640 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:32:29.561: INFO: observed ReplicaSet test-rs in namespace replicaset-640 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:32:29.564: INFO: observed ReplicaSet test-rs in namespace 
replicaset-640 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:32:31.017: INFO: observed ReplicaSet test-rs in namespace replicaset-640 with ReadyReplicas 2, AvailableReplicas 2 +Oct 27 15:32:31.147: INFO: observed Replicaset test-rs in namespace replicaset-640 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:31.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-640" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":331,"skipped":5904,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:31.160: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1823 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:32:31.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58091e45-efd1-4564-89c6-bade2850835a" in namespace "projected-1823" to be "Succeeded or Failed" +Oct 27 15:32:31.323: INFO: Pod "downwardapi-volume-58091e45-efd1-4564-89c6-bade2850835a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.606812ms +Oct 27 15:32:33.339: INFO: Pod "downwardapi-volume-58091e45-efd1-4564-89c6-bade2850835a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021184944s +STEP: Saw pod success +Oct 27 15:32:33.339: INFO: Pod "downwardapi-volume-58091e45-efd1-4564-89c6-bade2850835a" satisfied condition "Succeeded or Failed" +Oct 27 15:32:33.344: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downwardapi-volume-58091e45-efd1-4564-89c6-bade2850835a container client-container: +STEP: delete the pod +Oct 27 15:32:33.363: INFO: Waiting for pod downwardapi-volume-58091e45-efd1-4564-89c6-bade2850835a to disappear +Oct 27 15:32:33.368: INFO: Pod downwardapi-volume-58091e45-efd1-4564-89c6-bade2850835a no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:33.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1823" for this suite. 
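+
+A minimal sketch of surfacing a container's memory limit through a projected downward-API volume, as this case does; all names and the image are illustrative assumptions. With the default divisor of 1, the value is written in bytes.
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-limit-demo       # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox:1.36
+    command: ["cat", "/etc/podinfo/mem_limit"]
+    resources:
+      limits:
+        memory: "64Mi"
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: mem_limit
+            resourceFieldRef:
+              containerName: client-container
+              resource: limits.memory
+EOF
+kubectl logs downward-limit-demo  # expected output: 67108864 (64Mi in bytes)
+```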
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":332,"skipped":5914,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:33.382: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9358 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:32:33.530: INFO: Creating simple deployment test-new-deployment +Oct 27 15:32:33.545: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:32:35.596: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-9358 d96964cb-0cfc-44d6-8c50-f9468622cd95 43445 3 2021-10-27 15:32:33 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2021-10-27 15:32:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00620e748 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-27 15:32:35 +0000 UTC,LastTransitionTime:2021-10-27 15:32:33 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 15:32:35 +0000 UTC,LastTransitionTime:2021-10-27 15:32:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:32:35.600: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-9358 c8cd45b0-3e37-4661-a69d-169605e9afcf 43450 3 2021-10-27 15:32:33 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment d96964cb-0cfc-44d6-8c50-f9468622cd95 0xc00620ec87 0xc00620ec88}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:32:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d96964cb-0cfc-44d6-8c50-f9468622cd95\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:32:35 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd 
pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00620ed28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:32:35.659: INFO: Pod "test-new-deployment-847dcfb7fb-4vhrg" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-4vhrg test-new-deployment-847dcfb7fb- deployment-9358 345beb46-a09b-4296-a9a1-526f4a95e2f5 43448 0 2021-10-27 15:32:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb c8cd45b0-3e37-4661-a69d-169605e9afcf 0xc00620f277 0xc00620f278}] [] [{kube-controller-manager Update v1 2021-10-27 15:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c8cd45b0-3e37-4661-a69d-169605e9afcf\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6spx5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6spx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{
},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:32:35.659: INFO: Pod "test-new-deployment-847dcfb7fb-vdxnv" is available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-vdxnv test-new-deployment-847dcfb7fb- deployment-9358 2d3c6f29-f3e0-46a2-a6bf-6f150e9f92f4 43436 0 2021-10-27 15:32:33 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:8d8a1bae85504defb7f6c3b6cc102684207f68231d503293807e2f5f88d1d0cb cni.projectcalico.org/podIP:172.16.1.127/32 cni.projectcalico.org/podIPs:172.16.1.127/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb c8cd45b0-3e37-4661-a69d-169605e9afcf 0xc00620f480 0xc00620f481}] [] [{kube-controller-manager Update v1 2021-10-27 15:32:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c8cd45b0-3e37-4661-a69d-169605e9afcf\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:32:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:32:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.127\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nf5zx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf5zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCont
ainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:32:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:32:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.127,StartTime:2021-10-27 15:32:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:32:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://4a578c5bac051d17c5bdfc7d2ba553a43a6b0547a6eb4e8ac8417856e841f1ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:35.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9358" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":333,"skipped":5930,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:35.672: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-1307 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Oct 27 15:32:56.250: INFO: EndpointSlice for Service endpointslice-1307/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:33:06.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-1307" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":334,"skipped":5974,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:33:06.275: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3036 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 15:33:06.433: INFO: Waiting up to 5m0s for pod "pod-ad66df37-bcbf-4ea1-90d7-50bd77f3de68" in namespace "emptydir-3036" to be "Succeeded or Failed" +Oct 27 15:33:06.437: INFO: Pod "pod-ad66df37-bcbf-4ea1-90d7-50bd77f3de68": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.215881ms +Oct 27 15:33:08.455: INFO: Pod "pod-ad66df37-bcbf-4ea1-90d7-50bd77f3de68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022229905s +STEP: Saw pod success +Oct 27 15:33:08.455: INFO: Pod "pod-ad66df37-bcbf-4ea1-90d7-50bd77f3de68" satisfied condition "Succeeded or Failed" +Oct 27 15:33:08.460: INFO: Trying to get logs from node izgw81stpxs0bun38i01tfz pod pod-ad66df37-bcbf-4ea1-90d7-50bd77f3de68 container test-container: +STEP: delete the pod +Oct 27 15:33:08.523: INFO: Waiting for pod pod-ad66df37-bcbf-4ea1-90d7-50bd77f3de68 to disappear +Oct 27 15:33:08.527: INFO: Pod pod-ad66df37-bcbf-4ea1-90d7-50bd77f3de68 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:33:08.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3036" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":335,"skipped":5996,"failed":0} +S +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:33:08.540: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9586 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:33:12.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9586" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":336,"skipped":5997,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:33:12.722: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-8219 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:33:12.908: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ffe8c7bf-bbb1-41a5-8313-a5b8363eca21", Controller:(*bool)(0xc0025ba046), BlockOwnerDeletion:(*bool)(0xc0025ba047)}} +Oct 27 15:33:12.915: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"40363134-df3b-4a5a-a30a-69ce7974b65e", Controller:(*bool)(0xc0052f82be), BlockOwnerDeletion:(*bool)(0xc0052f82bf)}} +Oct 27 15:33:12.921: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"98b6e329-922b-473c-8224-8672209374d0", Controller:(*bool)(0xc004d03f76), BlockOwnerDeletion:(*bool)(0xc004d03f77)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:33:17.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-8219" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":337,"skipped":6005,"failed":0} +SSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:33:17.945: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9936 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:33:53.353: INFO: Creating deployment "webserver-deployment" +Oct 27 15:33:53.359: INFO: Waiting for observed generation 1 +Oct 27 15:33:55.368: INFO: Waiting for all required pods to come up +Oct 27 15:33:55.376: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Oct 27 15:33:57.389: INFO: Waiting for deployment "webserver-deployment" to complete +Oct 27 15:33:57.399: INFO: Updating deployment "webserver-deployment" with a non-existent image +Oct 27 15:33:57.410: INFO: Updating deployment webserver-deployment +Oct 27 15:33:57.410: INFO: Waiting for observed generation 2 +Oct 27 15:33:59.422: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Oct 27 15:33:59.426: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Oct 27 15:33:59.430: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:33:59.444: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Oct 27 15:33:59.444: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Oct 27 15:33:59.448: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:33:59.456: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Oct 27 15:33:59.456: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Oct 27 15:33:59.467: INFO: Updating deployment webserver-deployment +Oct 27 15:33:59.467: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:33:59.476: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Oct 27 15:33:59.481: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:34:01.495: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-9936 c40c094d-e677-4281-97a9-d96928f28f32 44172 3 2021-10-27 15:33:53 +0000 UTC map[name:httpd] 
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00381f518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 15:33:59 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-27 15:33:59 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Oct 27 15:34:01.500: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9936 693ff601-de1a-4c34-90d4-76b00851f344 44167 3 2021-10-27 15:33:57 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c40c094d-e677-4281-97a9-d96928f28f32 0xc00381f937 0xc00381f938}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c40c094d-e677-4281-97a9-d96928f28f32\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00381f9d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:34:01.500: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Oct 27 15:34:01.501: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-9936 b548669b-8ec6-4e1a-9965-a0ee45d98c38 44168 3 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c40c094d-e677-4281-97a9-d96928f28f32 0xc00381fa37 0xc00381fa38}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c40c094d-e677-4281-97a9-d96928f28f32\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00381fac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:34:01.515: INFO: Pod "webserver-deployment-795d758f88-47dv8" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-47dv8 webserver-deployment-795d758f88- deployment-9936 0bb1ede6-baf1-42bb-a08b-cb7f5978f5b4 44189 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:5b9ca21b60bbed8b366cc6823cfebdb4d1675a79d9027f841c2ab5e204d0f13c cni.projectcalico.org/podIP:172.16.0.108/32 cni.projectcalico.org/podIPs:172.16.0.108/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc00381ffa7 0xc00381ffa8}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5wdgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5wdgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.515: INFO: Pod "webserver-deployment-795d758f88-5j4rn" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-5j4rn webserver-deployment-795d758f88- deployment-9936 90837145-8925-49dd-8f55-941e624dfa76 44191 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:63e900c8282f95fc166d94cc08173ffbac1574313de8da778f520b0b3ebc733f cni.projectcalico.org/podIP:172.16.1.140/32 cni.projectcalico.org/podIPs:172.16.1.140/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003cec2e0 0xc003cec2e1}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p59b5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p59b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.516: INFO: Pod "webserver-deployment-795d758f88-6wzbn" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-6wzbn webserver-deployment-795d758f88- deployment-9936 4992241a-58aa-4a32-a276-bf170f9abd34 44206 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:6c5a9e37fba04a41044067a9b12ed7f4f24a19e35962e343aad17f6be2253044 cni.projectcalico.org/podIP:172.16.0.113/32 cni.projectcalico.org/podIPs:172.16.0.113/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003cec7f0 0xc003cec7f1}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cctlf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cctlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.516: INFO: Pod "webserver-deployment-795d758f88-8ftt4" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-8ftt4 webserver-deployment-795d758f88- deployment-9936 5ed97539-782a-4a5f-91d9-715fc4dc2e7c 44198 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:1d60091f2de9ec02eaae3dfa523477360abefd8e91a73ac59ca8ca3b4bec44eb cni.projectcalico.org/podIP:172.16.1.144/32 cni.projectcalico.org/podIPs:172.16.1.144/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003cecb70 0xc003cecb71}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h2b72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2b72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.516: INFO: Pod "webserver-deployment-795d758f88-8rlj7" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-8rlj7 webserver-deployment-795d758f88- deployment-9936 36203333-e114-49fe-bffb-aaaab69df64d 44194 0 2021-10-27 15:33:57 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:d95dda920c7b503be457cddd8b9e45e372d3ab62aaa8c41e1de031ee51403bcf cni.projectcalico.org/podIP:172.16.0.104/32 cni.projectcalico.org/podIPs:172.16.0.104/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003cecf90 0xc003cecf91}] [] [{calico Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.104\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hnqnd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnqnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,Node
Selector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:172.16.0.104,StartTime:2021-10-27 15:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.0.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.516: INFO: Pod "webserver-deployment-795d758f88-d4bm4" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-d4bm4 webserver-deployment-795d758f88- deployment-9936 603e2b96-eca0-4c33-b23f-43afe5e27662 44201 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:e89b7e903b3279fb1c3db9cee966e4179bf2d48a50d5ce50efd455fb61ae3f5c cni.projectcalico.org/podIP:172.16.1.146/32 cni.projectcalico.org/podIPs:172.16.1.146/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003ced380 0xc003ced381}] [] 
[{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m7tbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7tbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]
EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.516: INFO: Pod "webserver-deployment-795d758f88-j9b4z" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-j9b4z webserver-deployment-795d758f88- deployment-9936 2ee7367b-45a7-4ae4-84fe-29a243f8bce9 44102 0 2021-10-27 15:33:57 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:95f3365c81a6cea4a5c31420899e172f5a24463d4c698cb47f99f3958fe5509b cni.projectcalico.org/podIP:172.16.1.136/32 cni.projectcalico.org/podIPs:172.16.1.136/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003ced610 0xc003ced611}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:33:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jjrcl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjrcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.516: INFO: Pod "webserver-deployment-795d758f88-jtlrf" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-jtlrf webserver-deployment-795d758f88- deployment-9936 c8a69d32-7fd0-4b87-959c-92833111ae14 44197 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:32720241cbb292c682c4804f290f31e52196fa54d33b649ff1c27f2ce11863eb cni.projectcalico.org/podIP:172.16.0.109/32 cni.projectcalico.org/podIPs:172.16.0.109/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003ced820 0xc003ced821}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rwf59,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwf59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.516: INFO: Pod "webserver-deployment-795d758f88-mlqnz" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-mlqnz webserver-deployment-795d758f88- deployment-9936 973526c7-eb27-4e8a-a978-91abff03a322 44104 0 2021-10-27 15:33:57 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:b20d94488edbf8ea775ab3797e1ad9b129950b68985f53220a3649e702c6104d cni.projectcalico.org/podIP:172.16.0.105/32 cni.projectcalico.org/podIPs:172.16.0.105/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003ceda30 0xc003ceda31}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:33:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bb4q5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bb4q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.517: INFO: Pod "webserver-deployment-795d758f88-qpnsd" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-qpnsd webserver-deployment-795d758f88- deployment-9936 794dd3e8-69f7-4c82-8c53-c5c93213eb8e 44195 0 2021-10-27 15:33:57 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:2891d23a827480c9127925ea5841bf328303f427cc345427ed88dd729f812b4e cni.projectcalico.org/podIP:172.16.1.135/32 cni.projectcalico.org/podIPs:172.16.1.135/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003cedc60 0xc003cedc61}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:33:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b2x97,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b2x97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,Env
From:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.135,StartTime:2021-10-27 15:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.517: INFO: Pod "webserver-deployment-795d758f88-t9h7t" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-t9h7t webserver-deployment-795d758f88- deployment-9936 d40f818c-fa04-44c7-8f2b-7335e243fdb2 44203 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:4f02d9ed95f2164f7155ada320dd307bf398412c70e23853c6857e50befc151d cni.projectcalico.org/podIP:172.16.0.110/32 
cni.projectcalico.org/podIPs:172.16.0.110/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc003cedea0 0xc003cedea1}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z7dz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7dz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.517: INFO: Pod "webserver-deployment-795d758f88-vfvff" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-vfvff webserver-deployment-795d758f88- deployment-9936 fcc1c586-8d78-4e4b-bc6c-9d6bc0399123 44187 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:cc693b672e5bd5b2030d9ca0e234da25b16eaaa70420c9f7bcbfa967dccb2e48 cni.projectcalico.org/podIP:172.16.1.138/32 cni.projectcalico.org/podIPs:172.16.1.138/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 
693ff601-de1a-4c34-90d4-76b00851f344 0xc0036580b0 0xc0036580b1}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2bjc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2bjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,S
eccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.517: INFO: Pod "webserver-deployment-795d758f88-xw5k2" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-xw5k2 webserver-deployment-795d758f88- deployment-9936 3aa6f89e-3ac7-4f62-9a88-e6447d66949e 44103 0 2021-10-27 15:33:57 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:16acb3fbf58cf41087d86330d6c8ae3fb5a415afe9c2d8da300cf5af0e991f3d cni.projectcalico.org/podIP:172.16.1.137/32 cni.projectcalico.org/podIPs:172.16.1.137/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 693ff601-de1a-4c34-90d4-76b00851f344 0xc0036582c0 0xc0036582c1}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693ff601-de1a-4c34-90d4-76b00851f344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:33:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6lz46,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6lz46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.517: INFO: Pod "webserver-deployment-847dcfb7fb-2nhbs" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2nhbs webserver-deployment-847dcfb7fb- deployment-9936 11ce1553-23eb-462c-9c9d-8243e52239f4 44190 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:3a799970043dfd605c0ab90d91280a34c0087f0b3afd01e5fe475a868286825b cni.projectcalico.org/podIP:172.16.0.107/32 cni.projectcalico.org/podIPs:172.16.0.107/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc0036584d0 0xc0036584d1}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6plgn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6plgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.517: INFO: Pod "webserver-deployment-847dcfb7fb-2slcr" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2slcr webserver-deployment-847dcfb7fb- deployment-9936 d56c89c8-1292-4d24-a5f2-73a25b72f750 44039 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:8824192cd0a14da5bfdb700544eadb283656e77d2d597e83458b3d527e8e8340 cni.projectcalico.org/podIP:172.16.1.129/32 cni.projectcalico.org/podIPs:172.16.1.129/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc0036586c7 0xc0036586c8}] [] [{calico Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.129\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7zr84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zr84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,Env
From:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.129,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://e37d4808434f51120ae59b0ead5f65b20c10716b92b341e43b3e829e174ae61e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.518: INFO: Pod "webserver-deployment-847dcfb7fb-8ms8c" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8ms8c webserver-deployment-847dcfb7fb- deployment-9936 50b33068-0d47-4405-b724-08512988ba3e 44192 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:7531326be849f6097e8b2aabd7719d343716edf54a11f2b97330d9be1a9eaa34 cni.projectcalico.org/podIP:172.16.1.142/32 cni.projectcalico.org/podIPs:172.16.1.142/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc0036588d7 0xc0036588d8}] [] 
[{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-prbf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prbf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.518: INFO: Pod "webserver-deployment-847dcfb7fb-bbdgb" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-bbdgb webserver-deployment-847dcfb7fb- deployment-9936 7dfcbdc2-f257-427b-88db-d9d4ac6df072 44207 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:6c53cd0747fac960bef9ceca90a4518f1e1964f197d144f79a9be34a8aeb320e cni.projectcalico.org/podIP:172.16.0.114/32 cni.projectcalico.org/podIPs:172.16.0.114/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003658ad7 0xc003658ad8}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-27dbx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27dbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.518: INFO: Pod "webserver-deployment-847dcfb7fb-hsn7l" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hsn7l webserver-deployment-847dcfb7fb- deployment-9936 61fe7a91-fc68-4592-b6ce-71b59e59d826 44183 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003658cc7 0xc003658cc8}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8brtl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8brtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.518: INFO: Pod "webserver-deployment-847dcfb7fb-jf77r" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jf77r webserver-deployment-847dcfb7fb- deployment-9936 4e6812b5-e9ed-4de9-a52d-fcc70bf0acf5 44027 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:6ec23655f27c92b4be0597a2c9d6aab1c4e03ec622cb8f568d96931c68882ef4 cni.projectcalico.org/podIP:172.16.0.101/32 cni.projectcalico.org/podIPs:172.16.0.101/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003659087 0xc003659088}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:33:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zqxsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zqxsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:172.16.0.101,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://ffb3859dd0c8e89ff4a48be192c40dc85fdb7fd79733abc211fb06539bbeb04a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.0.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.518: INFO: Pod "webserver-deployment-847dcfb7fb-jfpls" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jfpls webserver-deployment-847dcfb7fb- deployment-9936 896cc1c9-8657-4131-a9e6-193ab628734a 44193 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:f4acb9be2034af0411bf645fb323974f46a0b9b1c9e63572431bd5f37323d6b2 cni.projectcalico.org/podIP:172.16.1.141/32 cni.projectcalico.org/podIPs:172.16.1.141/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc0036593e7 0xc0036593e8}] [] 
[{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vdplr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vdplr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.519: INFO: Pod "webserver-deployment-847dcfb7fb-kpcws" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-kpcws webserver-deployment-847dcfb7fb- deployment-9936 fa0008c9-0758-4abc-b75d-a84491157e6e 44200 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:b950b8f03792ab580f7e1b27960c48e50d543cd0e5e74de5a7d9419d864e5b38 cni.projectcalico.org/podIP:172.16.1.145/32 cni.projectcalico.org/podIPs:172.16.1.145/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003659687 0xc003659688}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s8nxc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8nxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.519: INFO: Pod "webserver-deployment-847dcfb7fb-l7nw6" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-l7nw6 webserver-deployment-847dcfb7fb- deployment-9936 0b43bdae-18af-40ba-bcfe-9dbefe7ef98d 44030 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:32d1b4b823374f65e32619cf789f11b4f72db58ca38f60fdf4f2db742a3c85e1 cni.projectcalico.org/podIP:172.16.0.102/32 cni.projectcalico.org/podIPs:172.16.0.102/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003659887 0xc003659888}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:33:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zv8pf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zv8pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:172.16.0.102,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://f66fab7257bbb0934d62055057e39f0b411daccf3a8c13690cd50d434cdedd3e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.0.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.519: INFO: Pod "webserver-deployment-847dcfb7fb-lnbhb" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-lnbhb webserver-deployment-847dcfb7fb- deployment-9936 8956dd63-5c00-4582-b356-caf14d3c2c22 44205 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:47e3dcd6f0c0a105eaa033a571ea00fc65fda3145b8a5f030a35d1e7323b2401 cni.projectcalico.org/podIP:172.16.0.112/32 cni.projectcalico.org/podIPs:172.16.0.112/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003659aa7 0xc003659aa8}] [] 
[{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5625p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5625p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.519: INFO: Pod "webserver-deployment-847dcfb7fb-nmtzv" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nmtzv webserver-deployment-847dcfb7fb- deployment-9936 36dede95-670e-4f94-a729-f334721f6db2 44048 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:d46d8e80627c7f009f3d420cc77a1a00ae8951d63faca7eb14d209cc8486f5ca cni.projectcalico.org/podIP:172.16.1.131/32 cni.projectcalico.org/podIPs:172.16.1.131/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003659ca7 0xc003659ca8}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:33:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.131\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gg24j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gg24j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.131,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://4e7db95e7d54cd2adef94e1aa8918e43f0fdbe7040a6f683c496448881b18f1c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.519: INFO: Pod "webserver-deployment-847dcfb7fb-nnmrm" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nnmrm webserver-deployment-847dcfb7fb- deployment-9936 a6110bc4-bbbf-47f3-a6a1-a30b648dfad1 44042 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:8447240c42c322a96704c66d3c9186ccae80b80c06c7d1f06e9727a314e883c8 cni.projectcalico.org/podIP:172.16.1.132/32 cni.projectcalico.org/podIPs:172.16.1.132/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc003659ec7 0xc003659ec8}] [] [{kube-controller-manager 
Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:33:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.132\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jsdgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jsdgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:fal
se,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.132,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://df47fd7d9d6e63c25063e65719001f9e1288cd45c6ff9c72b202cc69be4504c6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.519: INFO: Pod "webserver-deployment-847dcfb7fb-rdj28" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rdj28 webserver-deployment-847dcfb7fb- deployment-9936 498c4173-5c14-4669-97fa-9f5b031b2b95 44188 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:a1d53c1ab37ffa127219dc34b65aff45fcd5811f17c08a8b11a149b3233d81ba cni.projectcalico.org/podIP:172.16.1.139/32 cni.projectcalico.org/podIPs:172.16.1.139/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 
0xc0050100e7 0xc0050100e8}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6lzm8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6lzm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompPro
file:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.520: INFO: Pod "webserver-deployment-847dcfb7fb-rvvmw" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rvvmw webserver-deployment-847dcfb7fb- deployment-9936 c3367403-ae09-4875-89c3-becdd2c655d8 44202 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:e9276aae773e68702772b021dd3392c3903f2fbade40c3908371c74993c2b488 cni.projectcalico.org/podIP:172.16.1.147/32 cni.projectcalico.org/podIPs:172.16.1.147/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc0050102e7 0xc0050102e8}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 
+0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c2hfb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2hfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationM
essagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.520: INFO: Pod "webserver-deployment-847dcfb7fb-sb7vz" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-sb7vz webserver-deployment-847dcfb7fb- deployment-9936 abd0fcdd-2e8c-413a-b764-d0cff343c5ca 44024 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:25baa314e8c2c7eabd21105e36911a421d8f4ce6bb03d2379bbe68a84e05ddf7 cni.projectcalico.org/podIP:172.16.0.103/32 cni.projectcalico.org/podIPs:172.16.0.103/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc0050104e7 0xc0050104e8}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:33:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kvtqh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvtqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:172.16.0.103,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://8c8da7b2bd4e23396b6c3a73c9be293db2ebd6771ef74c66ab50cfd1fa7b8772,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.0.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.520: INFO: Pod "webserver-deployment-847dcfb7fb-sgc4c" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-sgc4c webserver-deployment-847dcfb7fb- deployment-9936 0b002de3-f3f9-4d79-9dda-6dce0411f31e 44185 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:3be93a82422f1bcc8548d68ba7e718403522895ed901b6eb3d074b592866b1bc cni.projectcalico.org/podIP:172.16.0.106/32 cni.projectcalico.org/podIPs:172.16.0.106/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc005010707 0xc005010708}] [] 
[{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cqjxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqjxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.520: INFO: Pod "webserver-deployment-847dcfb7fb-sxh6t" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-sxh6t webserver-deployment-847dcfb7fb- deployment-9936 12796460-7720-4ebc-9da1-a4d792cb99d3 44021 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:462adff32363f14f91f38741d46e710a5927258a0d6b02b011742042190c1671 cni.projectcalico.org/podIP:172.16.0.100/32 cni.projectcalico.org/podIPs:172.16.0.100/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc005010d47 0xc005010d48}] [] [{calico Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.0.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w9ddr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9ddr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,Env
From:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:172.16.0.100,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://76242e09e2a185d055aed5550975e771e9bc3f3ebab406f686ccc21e1af6d00f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.0.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.520: INFO: Pod "webserver-deployment-847dcfb7fb-tkmgh" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-tkmgh webserver-deployment-847dcfb7fb- deployment-9936 82362f07-16ad-4018-aaa1-2f989eb51749 44196 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:5093b32ea17cd3d3053a2c8a1322888eed0910cf57cd754ee17ddda5725281c5 cni.projectcalico.org/podIP:172.16.1.143/32 cni.projectcalico.org/podIPs:172.16.1.143/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc005010f57 0xc005010f58}] [] 
[{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rxdnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxdnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.520: INFO: Pod "webserver-deployment-847dcfb7fb-w6csw" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-w6csw webserver-deployment-847dcfb7fb- deployment-9936 31916b4b-0d26-4aac-b979-aebcad1dd964 44204 0 2021-10-27 15:33:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:3457801df3649f8dfb3615e5ce1df0f228a55052a0b32b8870b49e9843c09d96 cni.projectcalico.org/podIP:172.16.0.111/32 cni.projectcalico.org/podIPs:172.16.0.111/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc005011157 0xc005011158}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:34:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hx9rf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hx9rf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw81stpxs0bun38i01tfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.34,PodIP:,StartTime:2021-10-27 15:33:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:34:01.520: INFO: Pod "webserver-deployment-847dcfb7fb-wbzsz" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wbzsz webserver-deployment-847dcfb7fb- deployment-9936 9f5f80a0-c6fd-407f-90d7-4102a4ac284d 44036 0 2021-10-27 15:33:53 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:ccb4404a4b6c0fb00066b49457f7a2435fdec5c7346f05ab7596680c7992a319 cni.projectcalico.org/podIP:172.16.1.134/32 cni.projectcalico.org/podIPs:172.16.1.134/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb b548669b-8ec6-4e1a-9965-a0ee45d98c38 0xc005011357 0xc005011358}] [] [{kube-controller-manager Update v1 2021-10-27 15:33:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b548669b-8ec6-4e1a-9965-a0ee45d98c38\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:33:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:33:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.16.1.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k2qv5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmanu-jzf.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2qv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:izgw89f23rpcwrl79tpgp1z,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.8.35,PodIP:172.16.1.134,StartTime:2021-10-27 15:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:33:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://9eb66b144a71bd06eb1c8aa32ebbad91d231f61e8629c650bfc151e3fe3e39bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.16.1.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:01.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9936" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":338,"skipped":6008,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:01.533: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6835 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replication controller my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9 +Oct 27 15:34:01.693: INFO: Pod name my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9: Found 0 pods out of 1 +Oct 27 15:34:06.765: INFO: Pod name my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9: Found 1 pods out of 1 +Oct 27 15:34:06.765: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9" are running +Oct 27 15:34:06.860: INFO: Pod "my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9-qf297" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:34:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:34:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:34:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:34:01 +0000 UTC Reason: Message:}]) +Oct 27 15:34:06.860: INFO: Trying to dial the pod +Oct 27 15:34:31.901: INFO: Controller my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9: Failed to GET from replica 1 [my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9-qf297]: the server is currently unable to handle the request (get pods my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9-qf297) +pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945641, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945643, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945643, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945641, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.250.8.34", PodIP:"172.16.0.116", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.16.0.116"}}, StartTime:(*v1.Time)(0xc004a37758), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc004a37770), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1", ContainerID:"containerd://18711170d2330a419d69aed5ec2094d5b703649d59f3713f4f8dcff338d89be6", Started:(*bool)(0xc00594c21b)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} +Oct 27 15:34:36.935: INFO: Controller my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9: Got expected result from replica 1 [my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9-qf297]: "my-hostname-basic-e5335093-d252-4a73-a122-e7988b6173d9-qf297", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:36.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6835" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":339,"skipped":6033,"failed":0} + +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:36.950: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7624 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-7624 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 15:34:37.104: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 15:34:37.143: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:34:39.149: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:41.149: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:43.149: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:45.149: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:47.159: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:49.149: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:51.150: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:53.159: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:55.149: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:57.149: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:34:59.149: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 15:34:59.158: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 15:35:01.209: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 15:35:01.209: INFO: Going to poll 172.16.0.117 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 27 15:35:01.214: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.16.0.117:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7624 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:35:01.214: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:35:01.424: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 15:35:01.425: INFO: Going to poll 172.16.1.148 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 27 15:35:01.429: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g 
-q -s --max-time 15 --connect-timeout 1 http://172.16.1.148:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7624 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:35:01.430: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:35:01.703: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:01.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7624" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":340,"skipped":6033,"failed":0} + +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:01.717: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5516 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service endpoint-test2 in namespace services-5516 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5516 to expose endpoints map[] +Oct 27 15:35:01.882: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found +Oct 27 15:35:02.927: INFO: successfully validated that service endpoint-test2 in namespace services-5516 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-5516 +Oct 27 15:35:02.943: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:04.949: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5516 to expose endpoints map[pod1:[80]] +Oct 27 15:35:04.969: INFO: successfully validated that service endpoint-test2 in namespace services-5516 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Oct 27 15:35:04.969: INFO: Creating new exec pod +Oct 27 15:35:07.990: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5516 exec execpodwfvff -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 15:35:08.254: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 15:35:08.254: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: 
text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:35:08.254: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5516 exec execpodwfvff -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.26.56.141 80' +Oct 27 15:35:08.536: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.26.56.141 80\nConnection to 172.26.56.141 80 port [tcp/http] succeeded!\n" +Oct 27 15:35:08.536: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-5516 +Oct 27 15:35:08.552: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:10.558: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5516 to expose endpoints map[pod1:[80] pod2:[80]] +Oct 27 15:35:10.584: INFO: successfully validated that service endpoint-test2 in namespace services-5516 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Oct 27 15:35:11.585: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5516 exec execpodwfvff -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 15:35:11.911: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2 80\n+ echo hostName\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 15:35:11.911: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:35:11.911: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5516 exec execpodwfvff -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.26.56.141 80' +Oct 27 15:35:12.204: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.26.56.141 80\nConnection to 172.26.56.141 80 port [tcp/http] succeeded!\n" +Oct 27 15:35:12.204: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-5516 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5516 to expose endpoints map[pod2:[80]] +Oct 27 15:35:12.237: INFO: successfully validated that service endpoint-test2 in namespace services-5516 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Oct 27 15:35:13.238: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5516 exec execpodwfvff -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 15:35:13.545: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 15:35:13.545: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:35:13.545: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmanu-jzf.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5516 exec execpodwfvff -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.26.56.141 80' +Oct 27 15:35:13.808: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.26.56.141 80\nConnection to 172.26.56.141 80 port [tcp/http] succeeded!\n" +Oct 27 15:35:13.808: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-5516 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5516 to expose endpoints map[] +Oct 27 15:35:13.828: INFO: successfully validated that service endpoint-test2 in namespace services-5516 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:13.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5516" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":341,"skipped":6033,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:13.851: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6582 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:21.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6582" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":346,"completed":342,"skipped":6074,"failed":0} +SS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:21.027: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-748 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 15:35:21.188: INFO: Waiting up to 5m0s for pod "security-context-d7fbc2fe-81e3-4db6-b4fa-d066fe1a922f" in namespace "security-context-748" to be "Succeeded or Failed" +Oct 27 15:35:21.192: INFO: Pod "security-context-d7fbc2fe-81e3-4db6-b4fa-d066fe1a922f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289914ms +Oct 27 15:35:23.198: INFO: Pod "security-context-d7fbc2fe-81e3-4db6-b4fa-d066fe1a922f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010171393s +STEP: Saw pod success +Oct 27 15:35:23.198: INFO: Pod "security-context-d7fbc2fe-81e3-4db6-b4fa-d066fe1a922f" satisfied condition "Succeeded or Failed" +Oct 27 15:35:23.203: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod security-context-d7fbc2fe-81e3-4db6-b4fa-d066fe1a922f container test-container: +STEP: delete the pod +Oct 27 15:35:23.265: INFO: Waiting for pod security-context-d7fbc2fe-81e3-4db6-b4fa-d066fe1a922f to disappear +Oct 27 15:35:23.269: INFO: Pod security-context-d7fbc2fe-81e3-4db6-b4fa-d066fe1a922f no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:23.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-748" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":343,"skipped":6076,"failed":0} +SS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:23.283: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-4888 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Oct 27 15:35:25.465: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-4888" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":344,"skipped":6078,"failed":0} + +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:27.522: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7365 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:35:27.683: INFO: Waiting up to 5m0s for pod "downward-api-111f1e29-b26e-48c2-b244-8c2362ab0310" in namespace "downward-api-7365" to be "Succeeded or Failed" +Oct 27 15:35:27.688: INFO: Pod "downward-api-111f1e29-b26e-48c2-b244-8c2362ab0310": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.046325ms +Oct 27 15:35:29.694: INFO: Pod "downward-api-111f1e29-b26e-48c2-b244-8c2362ab0310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01052093s +STEP: Saw pod success +Oct 27 15:35:29.694: INFO: Pod "downward-api-111f1e29-b26e-48c2-b244-8c2362ab0310" satisfied condition "Succeeded or Failed" +Oct 27 15:35:29.699: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod downward-api-111f1e29-b26e-48c2-b244-8c2362ab0310 container dapi-container: +STEP: delete the pod +Oct 27 15:35:29.717: INFO: Waiting for pod downward-api-111f1e29-b26e-48c2-b244-8c2362ab0310 to disappear +Oct 27 15:35:29.721: INFO: Pod downward-api-111f1e29-b26e-48c2-b244-8c2362ab0310 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:29.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7365" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":345,"skipped":6078,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:29.734: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-224 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-2jps +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:35:29.907: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2jps" in namespace "subpath-224" to be "Succeeded or Failed" +Oct 27 15:35:29.912: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Pending", Reason="", readiness=false. Elapsed: 4.857889ms +Oct 27 15:35:31.918: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 2.010948689s +Oct 27 15:35:33.924: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 4.017198038s +Oct 27 15:35:35.931: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 6.023538634s +Oct 27 15:35:37.937: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 8.029729542s +Oct 27 15:35:39.943: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.035887598s +Oct 27 15:35:41.950: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 12.042754369s +Oct 27 15:35:43.956: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 14.049348888s +Oct 27 15:35:46.065: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 16.158407705s +Oct 27 15:35:48.072: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 18.164874944s +Oct 27 15:35:50.078: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Running", Reason="", readiness=true. Elapsed: 20.17143519s +Oct 27 15:35:52.085: INFO: Pod "pod-subpath-test-configmap-2jps": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.178440269s +STEP: Saw pod success +Oct 27 15:35:52.085: INFO: Pod "pod-subpath-test-configmap-2jps" satisfied condition "Succeeded or Failed" +Oct 27 15:35:52.090: INFO: Trying to get logs from node izgw89f23rpcwrl79tpgp1z pod pod-subpath-test-configmap-2jps container test-container-subpath-configmap-2jps: +STEP: delete the pod +Oct 27 15:35:52.109: INFO: Waiting for pod pod-subpath-test-configmap-2jps to disappear +Oct 27 15:35:52.113: INFO: Pod pod-subpath-test-configmap-2jps no longer exists +STEP: Deleting pod pod-subpath-test-configmap-2jps +Oct 27 15:35:52.113: INFO: Deleting pod "pod-subpath-test-configmap-2jps" in namespace "subpath-224" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:52.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-224" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":346,"skipped":6085,"failed":0} +SOct 27 15:35:52.130: INFO: Running AfterSuite actions on all nodes +Oct 27 15:35:52.130: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 +Oct 27 15:35:52.130: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Oct 27 15:35:52.130: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 +Oct 27 15:35:52.130: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Oct 27 15:35:52.130: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Oct 27 15:35:52.130: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Oct 27 15:35:52.130: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Oct 27 15:35:52.130: INFO: Running AfterSuite actions on node 1 +Oct 27 15:35:52.130: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/e2e/artifacts/1635343228/junit_01.xml +{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6086,"failed":0} + +Ran 346 of 6432 Specs in 5720.254 seconds +SUCCESS! 
-- 346 Passed | 0 Failed | 0 Flaked | 0 Pending | 6086 Skipped +PASS + +Ginkgo ran 1 suite in 1h35m23.09664486s +Test Suite Passed diff --git a/v1.22/gardener-alicloud/junit_01.xml b/v1.22/gardener-alicloud/junit_01.xml new file mode 100644 index 0000000000..18cad148fd --- /dev/null +++ b/v1.22/gardener-alicloud/junit_01.xml @@ -0,0 +1,18607 @@ +[junit_01.xml: 18,607 lines of JUnit XML test-report markup; the element content was lost during text extraction and no readable data remains]
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
\ No newline at end of file
diff --git a/v1.22/gardener-aws/PRODUCT.yaml b/v1.22/gardener-aws/PRODUCT.yaml
new file mode 100644
index 0000000000..fbf2ebb8a0
--- /dev/null
+++ b/v1.22/gardener-aws/PRODUCT.yaml
@@ -0,0 +1,9 @@
+vendor: SAP
+name: Gardener (https://github.com/gardener/gardener) shoot cluster deployed on AWS
+version: v1.34.0
+website_url: https://gardener.cloud
+repo_url: https://github.com/gardener/
+documentation_url: https://github.com/gardener/documentation/wiki
+product_logo_url: https://raw.githubusercontent.com/gardener/documentation/master/images/logo_w_saplogo.svg
+type: installer
+description: Gardener implements automated management and operation of Kubernetes clusters as a service and aims to support that service on multiple cloud providers.
\ No newline at end of file
diff --git a/v1.22/gardener-aws/README.md b/v1.22/gardener-aws/README.md
new file mode 100644
index 0000000000..647dbcb2f7
--- /dev/null
+++ b/v1.22/gardener-aws/README.md
@@ -0,0 +1,80 @@
+# Reproducing the test results:
+
+## Install Gardener on your Kubernetes Landscape
+Check out https://github.com/gardener/garden-setup for more detailed instructions and additional information. To install Gardener in your base cluster, the command line tool [sow](https://github.com/gardener/sow) is used. Use the provided Docker image that already contains `sow` and all required tools. To execute `sow` you call a [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) which starts `sow` in a Docker container (Docker will download the image from [eu.gcr.io/gardener-project/sow](http://eu.gcr.io/gardener-project/sow) if it is not available locally yet). Docker executes the `sow` command with the given arguments and mounts parts of your file system into that container so that `sow` can read configuration files for the installation of Gardener components and persist the state of your installation. After `sow`'s execution, Docker removes the container again.
+
+1. Clone the `sow` repository and add the path to our [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) to your `PATH` variable so you can call `sow` on the command line.
+
+    ```bash
+    # setup for calling sow via the wrapper
+    git clone "https://github.com/gardener/sow"
+    cd sow
+    export PATH=$PATH:$PWD/docker/bin
+    ```
+
+2. Create a directory `landscape` for your Gardener landscape and clone this repository into a subdirectory called `crop`:
+
+    ```bash
+    cd ..
+    mkdir landscape
+    cd landscape
+    git clone "https://github.com/gardener/garden-setup" crop
+    ```
+
+3. If you don't have your `kubeconfig` stored locally somewhere yet, download it. For example, for GKE you would use the following command (the angle-bracket placeholders stand for your cluster name, zone, and project):
+
+    ```bash
+    gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
+    ```
+
+4. Save your `kubeconfig` somewhere in your `landscape` directory. For the remaining steps we will assume that you saved it using the file path `landscape/kubeconfig`.
+
+5. In your `landscape` directory, create a configuration file called `acre.yaml`. The structure of the configuration file is described in [configuration file acre.yaml](https://github.com/gardener/garden-setup#configuration-file-acreyaml). Note that the relative file path `./kubeconfig` must be specified in field `landscape.cluster.kubeconfig` in the configuration file; a minimal sketch follows below.
+
+    > Do not use file `acre.yaml` in directory `crop`. This file is used internally by the installation tool.
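+
+    For orientation, here is a minimal sketch of `acre.yaml`; only `landscape.cluster.kubeconfig` is prescribed by this step, and the remaining fields are illustrative placeholders, so consult the linked garden-setup documentation for the authoritative schema:
+
+    ```yaml
+    # Minimal acre.yaml sketch. Only landscape.cluster.kubeconfig is
+    # prescribed by step 5; "name" is a hypothetical placeholder and a
+    # real landscape needs further fields (see the garden-setup docs).
+    landscape:
+      name: my-gardener            # hypothetical landscape name
+      cluster:
+        kubeconfig: ./kubeconfig   # relative path, as required above
+    ```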
+
+6. If you created the base cluster using GKE, convert your `kubeconfig` file to one that uses basic authentication with Google-specific configuration parameters:
+
+    ```bash
+    sow convertkubeconfig
+    ```
+    When asked for credentials, enter the ones that the GKE dashboard shows when clicking on `show credentials`.
+
+    `sow` will replace the file specified in `landscape.cluster.kubeconfig` of your `acre.yaml` file with a kubeconfig file that uses basic authentication.
+
+7. In your first terminal window, use the following command to check in which order the components will be installed. Nothing will be deployed yet, so you can also use this step to verify that your syntax in `acre.yaml` is correct:
+
+    ```bash
+    sow order -A
+    ```
+
+8. If there are no error messages, use the following command to deploy Gardener on your base cluster:
+
+    ```bash
+    sow deploy -A
+    ```
+
+9. `sow` now starts to install Gardener in your base cluster. The installation can take about 30 minutes. `sow` prints out status messages to the terminal window so that you can check the status of the installation. The other terminal window will show the newly created Kubernetes resources after a while and whether their deployment was successful. Wait until the last component is deployed and all created Kubernetes resources are in status `Running`.
+
+10. Use the following command to find out the URL of the Gardener dashboard:
+
+    ```bash
+    sow url
+    ```
+
+## Create Kubernetes Cluster
+
+Log in to the SAP Gardener Dashboard to create Kubernetes clusters on Amazon Web Services, Microsoft Azure, Google Cloud Platform, Alibaba Cloud, or the OpenStack cloud provider.
+
+## Launch E2E Conformance Tests
+Set `KUBECONFIG` to the path of the kubeconfig file of your newly created cluster (you can find the kubeconfig e.g. in the Gardener dashboard). Follow the instructions below to run the Kubernetes e2e conformance tests. Adjust the values of the `k8sVersion` and `cloudprovider` arguments to match your new cluster; an adjusted example follows after this block.
+
+```bash
+# first set KUBECONFIG to your cluster
+docker run -ti --rm -v $KUBECONFIG:/mye2e/shoot.config golang:1.13 bash
+# run all commands below within the container
+go get github.com/gardener/test-infra; cd /go/src/github.com/gardener/test-infra
+export GO111MODULE=on; export E2E_EXPORT_PATH=/tmp/export; export KUBECONFIG=/mye2e/shoot.config; export GINKGO_PARALLEL=false
+go run -mod=vendor ./integration-tests/e2e --k8sVersion=1.17.1 --cloudprovider=gcp --testcasegroup="conformance"
+```
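+
+For the AWS shoot whose results are recorded in the log below, a minimal sketch of the adjusted invocation; the kubeconfig path is a hypothetical example, and the version and provider values are taken from the e2e.log that follows (kube-apiserver v1.22.2 on AWS):
+
+```bash
+# On the host: point KUBECONFIG at the downloaded shoot kubeconfig
+# (hypothetical path; yours will differ).
+export KUBECONFIG=$HOME/Downloads/shoot-kubeconfig.yaml
+# Inside the container: same runner as above, adjusted for this cluster.
+go run -mod=vendor ./integration-tests/e2e --k8sVersion=1.22.2 --cloudprovider=aws --testcasegroup="conformance"
+```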
\ No newline at end of file
diff --git a/v1.22/gardener-aws/e2e.log b/v1.22/gardener-aws/e2e.log
new file mode 100644
index 0000000000..530cd51d43
--- /dev/null
+++ b/v1.22/gardener-aws/e2e.log
@@ -0,0 +1,13877 @@
+Conformance test: not doing test setup.
+I1027 14:00:02.567275 5725 e2e.go:129] Starting e2e run "70495107-0c9e-4c12-bbb2-c9f041d8ff81" on Ginkgo node 1
+{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0}
+Running Suite: Kubernetes e2e suite
+===================================
+Random Seed: 1635343202 - Will randomize all specs
+Will run 346 of 6432 specs
+
+Oct 27 14:00:04.575: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+Oct 27 14:00:04.576: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+Oct 27 14:00:05.025: INFO: Waiting up to 10m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready
+Oct 27 14:00:05.409: INFO: 24 / 24 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Oct 27 14:00:05.409: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready.
+Oct 27 14:00:05.409: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Oct 27 14:00:05.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'apiserver-proxy' (0 seconds elapsed) +Oct 27 14:00:05.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Oct 27 14:00:05.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-driver-node' (0 seconds elapsed) +Oct 27 14:00:05.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Oct 27 14:00:05.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed) +Oct 27 14:00:05.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed) +Oct 27 14:00:05.505: INFO: e2e test version: v1.22.2 +Oct 27 14:00:05.593: INFO: kube-apiserver version: v1.22.2 +Oct 27 14:00:05.593: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:00:05.684: INFO: Cluster IP family: ipv4 +SSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:05.684: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +W1027 14:00:06.043074 5725 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:00:06.043: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled +Oct 27 14:00:06.148: INFO: PSP annotation exists on dry run pod: "extensions.gardener.cloud.provider-aws.csi-driver-node"; assuming PodSecurityPolicy is enabled +W1027 14:00:06.236854 5725 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +W1027 14:00:06.326456 5725 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:00:06.426: INFO: Found ClusterRoles; assuming RBAC is enabled. +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-5526 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-1325 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. 
+STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-6282 +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:00:14.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-5526" for this suite. +STEP: Destroying namespace "nsdeletetest-1325" for this suite. +Oct 27 14:00:14.965: INFO: Namespace nsdeletetest-1325 was already deleted +STEP: Destroying namespace "nsdeletetest-6282" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":1,"skipped":4,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:15.055: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-6197 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-6197 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:00:15.779: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:00:16.238: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:00:18.328: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:00:20.329: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:00:22.329: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:24.329: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:26.329: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:28.327: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:30.329: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:32.328: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:34.329: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:36.329: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:00:36.508: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:00:39.236: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:00:39.236: INFO: Going to poll 100.96.1.5 on port 8083 at least 0 times, with a 
maximum of 34 tries before failing +Oct 27 14:00:39.325: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6197 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:00:39.325: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:00:40.057: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 14:00:40.057: INFO: Going to poll 100.96.0.16 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:00:40.147: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.0.16:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6197 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:00:40.147: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:00:41.019: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:00:41.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-6197" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":2,"skipped":24,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:41.286: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8927 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-8927 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-8927 +STEP: creating replication controller externalsvc in namespace services-8927 +I1027 14:00:42.295422 5725 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8927, replica count: 2 +I1027 14:00:45.396522 5725 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Oct 27 14:00:45.670: INFO: Creating new exec pod +Oct 27 
14:00:47.942: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8927 exec execpodsw8rh -- /bin/sh -x -c nslookup nodeport-service.services-8927.svc.cluster.local' +Oct 27 14:00:49.101: INFO: stderr: "+ nslookup nodeport-service.services-8927.svc.cluster.local\n" +Oct 27 14:00:49.102: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nnodeport-service.services-8927.svc.cluster.local\tcanonical name = externalsvc.services-8927.svc.cluster.local.\nName:\texternalsvc.services-8927.svc.cluster.local\nAddress: 100.70.197.82\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-8927, will wait for the garbage collector to delete the pods +Oct 27 14:00:49.381: INFO: Deleting ReplicationController externalsvc took: 89.971885ms +Oct 27 14:00:49.482: INFO: Terminating ReplicationController externalsvc pods took: 100.723312ms +Oct 27 14:00:51.580: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:00:51.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8927" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":3,"skipped":32,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:00:51.853: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-3410 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-3410 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:00:52.577: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:00:53.036: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:00:55.125: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:57.125: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:00:59.126: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:01:01.127: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:01:03.126: INFO: The status of Pod netserver-0 is 
Running (Ready = false) +Oct 27 14:01:05.126: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:01:07.127: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:01:09.126: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:01:11.126: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:01:13.126: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:01:13.335: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:01:15.787: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:01:15.787: INFO: Breadth first check of 100.96.1.10 on host 10.250.28.25... +Oct 27 14:01:15.876: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.11:9080/dial?request=hostname&protocol=http&host=100.96.1.10&port=8083&tries=1'] Namespace:pod-network-test-3410 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:01:15.876: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:01:16.603: INFO: Waiting for responses: map[] +Oct 27 14:01:16.604: INFO: reached 100.96.1.10 after 0/1 tries +Oct 27 14:01:16.604: INFO: Breadth first check of 100.96.0.18 on host 10.250.9.48... +Oct 27 14:01:16.693: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.11:9080/dial?request=hostname&protocol=http&host=100.96.0.18&port=8083&tries=1'] Namespace:pod-network-test-3410 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:01:16.693: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:01:17.396: INFO: Waiting for responses: map[] +Oct 27 14:01:17.396: INFO: reached 100.96.0.18 after 0/1 tries +Oct 27 14:01:17.396: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:01:17.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-3410" for this suite. 
+•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":4,"skipped":100,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:01:17.663: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5662 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:01:18.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-703a2c1d-38a4-47b0-8704-c960985da5df" in namespace "downward-api-5662" to be "Succeeded or Failed" +Oct 27 14:01:18.573: INFO: Pod "downwardapi-volume-703a2c1d-38a4-47b0-8704-c960985da5df": Phase="Pending", Reason="", readiness=false. Elapsed: 89.158306ms +Oct 27 14:01:20.663: INFO: Pod "downwardapi-volume-703a2c1d-38a4-47b0-8704-c960985da5df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179257677s +STEP: Saw pod success +Oct 27 14:01:20.663: INFO: Pod "downwardapi-volume-703a2c1d-38a4-47b0-8704-c960985da5df" satisfied condition "Succeeded or Failed" +Oct 27 14:01:20.752: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-703a2c1d-38a4-47b0-8704-c960985da5df container client-container: +STEP: delete the pod +Oct 27 14:01:20.946: INFO: Waiting for pod downwardapi-volume-703a2c1d-38a4-47b0-8704-c960985da5df to disappear +Oct 27 14:01:21.035: INFO: Pod downwardapi-volume-703a2c1d-38a4-47b0-8704-c960985da5df no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:01:21.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5662" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":5,"skipped":126,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:01:21.302: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename discovery +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-5392 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:01:22.606: INFO: Checking APIGroup: apiregistration.k8s.io +Oct 27 14:01:22.694: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Oct 27 14:01:22.694: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Oct 27 14:01:22.694: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Oct 27 14:01:22.694: INFO: Checking APIGroup: apps +Oct 27 14:01:22.782: INFO: PreferredVersion.GroupVersion: apps/v1 +Oct 27 14:01:22.782: INFO: Versions found [{apps/v1 v1}] +Oct 27 14:01:22.782: INFO: apps/v1 matches apps/v1 +Oct 27 14:01:22.782: INFO: Checking APIGroup: events.k8s.io +Oct 27 14:01:22.870: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Oct 27 14:01:22.870: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Oct 27 14:01:22.870: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Oct 27 14:01:22.870: INFO: Checking APIGroup: authentication.k8s.io +Oct 27 14:01:22.958: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Oct 27 14:01:22.963: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Oct 27 14:01:22.963: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Oct 27 14:01:22.963: INFO: Checking APIGroup: authorization.k8s.io +Oct 27 14:01:23.051: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Oct 27 14:01:23.057: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Oct 27 14:01:23.057: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Oct 27 14:01:23.057: INFO: Checking APIGroup: autoscaling +Oct 27 14:01:23.145: INFO: PreferredVersion.GroupVersion: autoscaling/v1 +Oct 27 14:01:23.145: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Oct 27 14:01:23.145: INFO: autoscaling/v1 matches autoscaling/v1 +Oct 27 14:01:23.145: INFO: Checking APIGroup: batch +Oct 27 14:01:23.233: INFO: PreferredVersion.GroupVersion: batch/v1 +Oct 27 14:01:23.235: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Oct 27 14:01:23.235: INFO: batch/v1 matches batch/v1 +Oct 27 14:01:23.235: INFO: Checking APIGroup: certificates.k8s.io +Oct 27 14:01:23.324: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Oct 27 
14:01:23.353: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Oct 27 14:01:23.353: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Oct 27 14:01:23.353: INFO: Checking APIGroup: networking.k8s.io +Oct 27 14:01:23.441: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Oct 27 14:01:23.441: INFO: Versions found [{networking.k8s.io/v1 v1}] +Oct 27 14:01:23.441: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Oct 27 14:01:23.441: INFO: Checking APIGroup: policy +Oct 27 14:01:23.530: INFO: PreferredVersion.GroupVersion: policy/v1 +Oct 27 14:01:23.530: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Oct 27 14:01:23.530: INFO: policy/v1 matches policy/v1 +Oct 27 14:01:23.530: INFO: Checking APIGroup: rbac.authorization.k8s.io +Oct 27 14:01:23.618: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Oct 27 14:01:23.618: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Oct 27 14:01:23.618: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Oct 27 14:01:23.618: INFO: Checking APIGroup: storage.k8s.io +Oct 27 14:01:23.706: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Oct 27 14:01:23.706: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Oct 27 14:01:23.706: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Oct 27 14:01:23.706: INFO: Checking APIGroup: admissionregistration.k8s.io +Oct 27 14:01:23.794: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Oct 27 14:01:23.794: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Oct 27 14:01:23.794: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Oct 27 14:01:23.794: INFO: Checking APIGroup: apiextensions.k8s.io +Oct 27 14:01:23.882: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Oct 27 14:01:23.882: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Oct 27 14:01:23.883: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Oct 27 14:01:23.883: INFO: Checking APIGroup: scheduling.k8s.io +Oct 27 14:01:23.976: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Oct 27 14:01:23.976: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Oct 27 14:01:23.976: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Oct 27 14:01:23.976: INFO: Checking APIGroup: coordination.k8s.io +Oct 27 14:01:24.064: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Oct 27 14:01:24.064: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Oct 27 14:01:24.064: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Oct 27 14:01:24.064: INFO: Checking APIGroup: node.k8s.io +Oct 27 14:01:24.153: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Oct 27 14:01:24.153: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Oct 27 14:01:24.153: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Oct 27 14:01:24.153: INFO: Checking APIGroup: discovery.k8s.io +Oct 27 14:01:24.240: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Oct 27 14:01:24.260: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Oct 27 14:01:24.260: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Oct 27 14:01:24.260: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Oct 27 14:01:24.349: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 14:01:24.349: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Oct 27 14:01:24.349: INFO: flowcontrol.apiserver.k8s.io/v1beta1 
matches flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 14:01:24.349: INFO: Checking APIGroup: autoscaling.k8s.io +Oct 27 14:01:24.437: INFO: PreferredVersion.GroupVersion: autoscaling.k8s.io/v1 +Oct 27 14:01:24.437: INFO: Versions found [{autoscaling.k8s.io/v1 v1} {autoscaling.k8s.io/v1beta2 v1beta2}] +Oct 27 14:01:24.437: INFO: autoscaling.k8s.io/v1 matches autoscaling.k8s.io/v1 +Oct 27 14:01:24.437: INFO: Checking APIGroup: crd.projectcalico.org +Oct 27 14:01:24.525: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Oct 27 14:01:24.525: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Oct 27 14:01:24.525: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Oct 27 14:01:24.525: INFO: Checking APIGroup: cert.gardener.cloud +Oct 27 14:01:24.613: INFO: PreferredVersion.GroupVersion: cert.gardener.cloud/v1alpha1 +Oct 27 14:01:24.613: INFO: Versions found [{cert.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 14:01:24.613: INFO: cert.gardener.cloud/v1alpha1 matches cert.gardener.cloud/v1alpha1 +Oct 27 14:01:24.613: INFO: Checking APIGroup: dns.gardener.cloud +Oct 27 14:01:24.701: INFO: PreferredVersion.GroupVersion: dns.gardener.cloud/v1alpha1 +Oct 27 14:01:24.701: INFO: Versions found [{dns.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 14:01:24.701: INFO: dns.gardener.cloud/v1alpha1 matches dns.gardener.cloud/v1alpha1 +Oct 27 14:01:24.701: INFO: Checking APIGroup: snapshot.storage.k8s.io +Oct 27 14:01:24.790: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 +Oct 27 14:01:24.790: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] +Oct 27 14:01:24.790: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 +Oct 27 14:01:24.790: INFO: Checking APIGroup: metrics.k8s.io +Oct 27 14:01:24.878: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Oct 27 14:01:24.878: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Oct 27 14:01:24.878: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:01:24.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-5392" for this suite. 
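+
+What the discovery test above validates can be reproduced straight from the aggregated discovery document: every API group advertises a preferredVersion that must appear among its served versions. A quick way to eyeball the same data, assuming jq is available on the client:
+
+# List each API group with its server-preferred version and all served versions.
+kubectl get --raw /apis \
+  | jq -r '.groups[] | "\(.name): preferred=\(.preferredVersion.groupVersion) versions=\([.versions[].groupVersion] | join(","))"'
+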
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":6,"skipped":140,"failed":0} + +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:01:25.380: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5604 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-b178fba5-0510-44f6-9369-77e02f1f67b8 +STEP: Creating a pod to test consume secrets +Oct 27 14:01:26.290: INFO: Waiting up to 5m0s for pod "pod-secrets-0a9d134b-64e3-4ada-a392-54ba6ec7a0d3" in namespace "secrets-5604" to be "Succeeded or Failed" +Oct 27 14:01:26.381: INFO: Pod "pod-secrets-0a9d134b-64e3-4ada-a392-54ba6ec7a0d3": Phase="Pending", Reason="", readiness=false. Elapsed: 89.655465ms +Oct 27 14:01:28.471: INFO: Pod "pod-secrets-0a9d134b-64e3-4ada-a392-54ba6ec7a0d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179486995s +STEP: Saw pod success +Oct 27 14:01:28.471: INFO: Pod "pod-secrets-0a9d134b-64e3-4ada-a392-54ba6ec7a0d3" satisfied condition "Succeeded or Failed" +Oct 27 14:01:28.560: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-0a9d134b-64e3-4ada-a392-54ba6ec7a0d3 container secret-volume-test: +STEP: delete the pod +Oct 27 14:01:28.749: INFO: Waiting for pod pod-secrets-0a9d134b-64e3-4ada-a392-54ba6ec7a0d3 to disappear +Oct 27 14:01:28.838: INFO: Pod pod-secrets-0a9d134b-64e3-4ada-a392-54ba6ec7a0d3 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:01:28.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5604" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":7,"skipped":140,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:01:29.105: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-1894 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:01:42.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1894" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":8,"skipped":170,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:01:42.991: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8284 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396 +STEP: creating an pod +Oct 27 14:01:43.743: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Oct 27 14:01:44.163: INFO: stderr: "" +Oct 27 14:01:44.163: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for log generator to start. +Oct 27 14:01:44.163: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Oct 27 14:01:44.163: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8284" to be "running and ready, or succeeded" +Oct 27 14:01:44.253: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 89.900225ms +Oct 27 14:01:46.342: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.179410607s +Oct 27 14:01:46.342: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Oct 27 14:01:46.342: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] +STEP: checking for a matching strings +Oct 27 14:01:46.342: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 logs logs-generator logs-generator' +Oct 27 14:01:46.812: INFO: stderr: "" +Oct 27 14:01:46.812: INFO: stdout: "I1027 14:01:45.116401 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/xwzd 412\nI1027 14:01:45.316468 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/wmp 459\nI1027 14:01:45.517211 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/wps 488\nI1027 14:01:45.716438 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/7w7m 218\nI1027 14:01:45.916741 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/wms 549\nI1027 14:01:46.117034 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/cn6 404\nI1027 14:01:46.317326 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/245 313\nI1027 14:01:46.516553 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/7f98 585\nI1027 14:01:46.716841 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/lvr7 568\n" +STEP: limiting log lines +Oct 27 14:01:46.812: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 logs logs-generator logs-generator --tail=1' +Oct 27 14:01:47.275: INFO: stderr: "" +Oct 27 14:01:47.275: INFO: stdout: "I1027 14:01:47.117420 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/nld7 431\n" +Oct 27 14:01:47.275: INFO: got output "I1027 14:01:47.117420 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/nld7 431\n" +STEP: limiting log bytes +Oct 27 14:01:47.275: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 logs logs-generator logs-generator --limit-bytes=1' +Oct 27 14:01:47.728: INFO: stderr: "" +Oct 27 14:01:47.728: INFO: stdout: "I" +Oct 27 14:01:47.728: INFO: got output "I" +STEP: exposing timestamps +Oct 27 14:01:47.728: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 logs logs-generator logs-generator --tail=1 --timestamps' +Oct 27 14:01:48.192: INFO: stderr: "" +Oct 27 
14:01:48.192: INFO: stdout: "2021-10-27T14:01:48.116920737Z I1027 14:01:48.116760 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/prt4 482\n" +Oct 27 14:01:48.192: INFO: got output "2021-10-27T14:01:48.116920737Z I1027 14:01:48.116760 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/prt4 482\n" +STEP: restricting to a time range +Oct 27 14:01:50.692: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 logs logs-generator logs-generator --since=1s' +Oct 27 14:01:51.140: INFO: stderr: "" +Oct 27 14:01:51.140: INFO: stdout: "I1027 14:01:50.116402 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/v9q 299\nI1027 14:01:50.316692 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/d8m 599\nI1027 14:01:50.516985 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/c8k 472\nI1027 14:01:50.717316 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/p7m 283\nI1027 14:01:50.916447 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/w5m 301\n" +Oct 27 14:01:51.141: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 logs logs-generator logs-generator --since=24h' +Oct 27 14:01:51.620: INFO: stderr: "" +Oct 27 14:01:51.620: INFO: stdout: "I1027 14:01:45.116401 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/xwzd 412\nI1027 14:01:45.316468 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/wmp 459\nI1027 14:01:45.517211 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/wps 488\nI1027 14:01:45.716438 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/7w7m 218\nI1027 14:01:45.916741 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/wms 549\nI1027 14:01:46.117034 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/cn6 404\nI1027 14:01:46.317326 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/245 313\nI1027 14:01:46.516553 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/7f98 585\nI1027 14:01:46.716841 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/lvr7 568\nI1027 14:01:46.917151 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/nb7l 285\nI1027 14:01:47.117420 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/nld7 431\nI1027 14:01:47.316709 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/t2px 491\nI1027 14:01:47.517004 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/vbg2 445\nI1027 14:01:47.717294 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/v42 279\nI1027 14:01:47.916427 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/qz45 433\nI1027 14:01:48.116760 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/prt4 482\nI1027 14:01:48.317049 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/82wn 305\nI1027 14:01:48.517318 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/wf6 448\nI1027 14:01:48.716472 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/w4h 498\nI1027 14:01:48.916760 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/nwk 463\nI1027 14:01:49.117050 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/bgft 536\nI1027 
14:01:49.317342 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/nlzh 272\nI1027 14:01:49.516472 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/6sx9 359\nI1027 14:01:49.716760 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/269 586\nI1027 14:01:49.917050 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/968v 564\nI1027 14:01:50.116402 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/v9q 299\nI1027 14:01:50.316692 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/d8m 599\nI1027 14:01:50.516985 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/c8k 472\nI1027 14:01:50.717316 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/p7m 283\nI1027 14:01:50.916447 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/w5m 301\nI1027 14:01:51.116747 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/nzfp 479\nI1027 14:01:51.317014 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/tkzm 419\nI1027 14:01:51.517308 1 logs_generator.go:76] 32 GET /api/v1/namespaces/kube-system/pods/tq7 399\n" +[AfterEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401 +Oct 27 14:01:51.620: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8284 delete pod logs-generator' +Oct 27 14:01:53.125: INFO: stderr: "" +Oct 27 14:01:53.125: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:01:53.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8284" for this suite. 
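+
+The sequence above walks through kubectl's stock log-filtering flags against a single pod. They compose freely, so the whole set boils down to (pod name illustrative):
+
+kubectl logs logs-generator                        # full container log
+kubectl logs logs-generator --tail=1               # only the last line
+kubectl logs logs-generator --limit-bytes=1        # only the first byte
+kubectl logs logs-generator --tail=1 --timestamps  # prefix RFC3339 timestamps
+kubectl logs logs-generator --since=1s             # only entries from the last second
+kubectl logs logs-generator --since=24h            # effectively everything again
+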
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":9,"skipped":171,"failed":0} +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:01:53.392: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1432 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:01:54.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-460f4575-b50a-43b2-89bf-f019947e72d7" in namespace "projected-1432" to be "Succeeded or Failed" +Oct 27 14:01:54.324: INFO: Pod "downwardapi-volume-460f4575-b50a-43b2-89bf-f019947e72d7": Phase="Pending", Reason="", readiness=false. Elapsed: 88.85077ms +Oct 27 14:01:56.414: INFO: Pod "downwardapi-volume-460f4575-b50a-43b2-89bf-f019947e72d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178900734s +STEP: Saw pod success +Oct 27 14:01:56.414: INFO: Pod "downwardapi-volume-460f4575-b50a-43b2-89bf-f019947e72d7" satisfied condition "Succeeded or Failed" +Oct 27 14:01:56.503: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-460f4575-b50a-43b2-89bf-f019947e72d7 container client-container: +STEP: delete the pod +Oct 27 14:01:56.732: INFO: Waiting for pod downwardapi-volume-460f4575-b50a-43b2-89bf-f019947e72d7 to disappear +Oct 27 14:01:56.821: INFO: Pod downwardapi-volume-460f4575-b50a-43b2-89bf-f019947e72d7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:01:56.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1432" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":10,"skipped":174,"failed":0} +SSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:01:57.088: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-9461 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test service account token: +Oct 27 14:01:57.945: INFO: Waiting up to 5m0s for pod "test-pod-0179f25a-b592-4b46-8f32-7866268e5ca1" in namespace "svcaccounts-9461" to be "Succeeded or Failed" +Oct 27 14:01:58.101: INFO: Pod "test-pod-0179f25a-b592-4b46-8f32-7866268e5ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 155.412249ms +Oct 27 14:02:00.191: INFO: Pod "test-pod-0179f25a-b592-4b46-8f32-7866268e5ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.245719036s +STEP: Saw pod success +Oct 27 14:02:00.191: INFO: Pod "test-pod-0179f25a-b592-4b46-8f32-7866268e5ca1" satisfied condition "Succeeded or Failed" +Oct 27 14:02:00.280: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod test-pod-0179f25a-b592-4b46-8f32-7866268e5ca1 container agnhost-container: +STEP: delete the pod +Oct 27 14:02:00.471: INFO: Waiting for pod test-pod-0179f25a-b592-4b46-8f32-7866268e5ca1 to disappear +Oct 27 14:02:00.560: INFO: Pod test-pod-0179f25a-b592-4b46-8f32-7866268e5ca1 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:00.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-9461" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":11,"skipped":182,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:00.828: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6044 +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-a196db4f-60fd-4193-8258-12730695d9a5 +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:02:06.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6044" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":12,"skipped":195,"failed":0} +S +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:02:06.511: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6038 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-6038 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 14:02:07.506: INFO: Found 1 stateful pods, waiting for 3 +Oct 27 14:02:17.597: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:02:17.597: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 
14:02:17.597: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 14:02:27.596: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:02:27.596: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:02:27.596: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 14:02:28.055: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Oct 27 14:02:28.426: INFO: Updating stateful set ss2 +Oct 27 14:02:28.606: INFO: Waiting for Pod statefulset-6038/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Restoring Pods to the correct revision when they are deleted +Oct 27 14:02:39.060: INFO: Found 2 stateful pods, waiting for 3 +Oct 27 14:02:49.151: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:02:49.151: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:02:49.151: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Oct 27 14:02:49.558: INFO: Updating stateful set ss2 +Oct 27 14:02:49.737: INFO: Waiting for Pod statefulset-6038/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 14:03:00.106: INFO: Updating stateful set ss2 +Oct 27 14:03:00.285: INFO: Waiting for StatefulSet statefulset-6038/ss2 to complete update +Oct 27 14:03:00.285: INFO: Waiting for Pod statefulset-6038/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:03:10.466: INFO: Deleting all statefulset in ns statefulset-6038 +Oct 27 14:03:10.555: INFO: Scaling statefulset ss2 to 0 +Oct 27 14:03:20.914: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:03:21.003: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:21.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6038" for this suite. 
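+
+The canary and phased rollout above are both driven by the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition receive the new revision. A sketch of the sequence against the same 3-replica set, assuming the container in the pod template is named webserver:
+
+# 1. Stage the new image but hold every pod back (partition above replica count).
+kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
+kubectl set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
+# 2. Canary: lower the partition to 2 so only the highest ordinal (ss2-2) updates.
+kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
+# 3. Phased rollout: keep lowering the partition (1, then 0) to update the rest.
+kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
+kubectl rollout status statefulset/ss2
+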
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":13,"skipped":196,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:21.539: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3602 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 14:03:22.361: INFO: Waiting up to 5m0s for pod "pod-1b6f47d3-e64b-4782-b465-b0255faf34dd" in namespace "emptydir-3602" to be "Succeeded or Failed" +Oct 27 14:03:22.450: INFO: Pod "pod-1b6f47d3-e64b-4782-b465-b0255faf34dd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.435302ms +Oct 27 14:03:24.540: INFO: Pod "pod-1b6f47d3-e64b-4782-b465-b0255faf34dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179256398s +STEP: Saw pod success +Oct 27 14:03:24.540: INFO: Pod "pod-1b6f47d3-e64b-4782-b465-b0255faf34dd" satisfied condition "Succeeded or Failed" +Oct 27 14:03:24.629: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-1b6f47d3-e64b-4782-b465-b0255faf34dd container test-container: +STEP: delete the pod +Oct 27 14:03:24.820: INFO: Waiting for pod pod-1b6f47d3-e64b-4782-b465-b0255faf34dd to disappear +Oct 27 14:03:24.909: INFO: Pod pod-1b6f47d3-e64b-4782-b465-b0255faf34dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:24.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3602" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":14,"skipped":211,"failed":0} +SSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:25.175: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4513 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service multi-endpoint-test in namespace services-4513 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4513 to expose endpoints map[] +Oct 27 14:03:26.264: INFO: successfully validated that service multi-endpoint-test in namespace services-4513 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-4513 +Oct 27 14:03:26.448: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:03:28.538: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4513 to expose endpoints map[pod1:[100]] +Oct 27 14:03:28.983: INFO: successfully validated that service multi-endpoint-test in namespace services-4513 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-4513 +Oct 27 14:03:29.166: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:03:31.256: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4513 to expose endpoints map[pod1:[100] pod2:[101]] +Oct 27 14:03:31.791: INFO: successfully validated that service multi-endpoint-test in namespace services-4513 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Oct 27 14:03:31.791: INFO: Creating new exec pod +Oct 27 14:03:35.067: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4513 exec execpodxxndl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Oct 27 14:03:36.158: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:03:36.158: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:03:36.158: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4513 exec execpodxxndl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.17.155 80' +Oct 27 14:03:37.183: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.65.17.155 80\nConnection to 100.65.17.155 80 port [tcp/http] succeeded!\n" +Oct 27 14:03:37.183: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:03:37.183: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4513 exec execpodxxndl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Oct 27 14:03:38.240: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Oct 27 14:03:38.240: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:03:38.241: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4513 exec execpodxxndl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.17.155 81' +Oct 27 14:03:39.313: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.65.17.155 81\nConnection to 100.65.17.155 81 port [tcp/*] succeeded!\n" +Oct 27 14:03:39.313: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-4513 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4513 to expose endpoints map[pod2:[101]] +Oct 27 14:03:39.763: INFO: successfully validated that service multi-endpoint-test in namespace services-4513 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-4513 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4513 to expose endpoints map[] +Oct 27 14:03:40.124: INFO: successfully validated that service multi-endpoint-test in namespace services-4513 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:40.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4513" for this suite. 
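+
+The multiport check above is really about the Endpoints machinery: one Service with two named ports maps each to a different targetPort, and the endpoints map (pod1:[100] pod2:[101] in the log) tracks pods as they come and go. A sketch of the Service shape involved, with illustrative names and selector:
+
+# Sketch: one Service exposing two named ports with distinct target ports.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: multi-endpoint-test
+spec:
+  selector:
+    app: multiport-demo
+  ports:
+  - name: portname1
+    port: 80
+    targetPort: 100   # pod1's containerPort in the scenario above
+  - name: portname2
+    port: 81
+    targetPort: 101   # pod2's containerPort in the scenario above
+EOF
+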
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":15,"skipped":218,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:40.487: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-2171 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +STEP: reading a file in the container +Oct 27 14:03:44.256: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-2171 pod-service-account-71dc1b2a-d2ff-4546-9fbd-b1fff21f04f3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Oct 27 14:03:45.373: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-2171 pod-service-account-71dc1b2a-d2ff-4546-9fbd-b1fff21f04f3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Oct 27 14:03:46.424: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-2171 pod-service-account-71dc1b2a-d2ff-4546-9fbd-b1fff21f04f3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:47.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2171" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":16,"skipped":236,"failed":0} +SSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:47.821: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-206 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:03:48.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-068916eb-524f-4c90-b16a-7936fe716e4a" in namespace "downward-api-206" to be "Succeeded or Failed" +Oct 27 14:03:48.731: INFO: Pod "downwardapi-volume-068916eb-524f-4c90-b16a-7936fe716e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.26545ms +Oct 27 14:03:50.821: INFO: Pod "downwardapi-volume-068916eb-524f-4c90-b16a-7936fe716e4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179389813s +STEP: Saw pod success +Oct 27 14:03:50.821: INFO: Pod "downwardapi-volume-068916eb-524f-4c90-b16a-7936fe716e4a" satisfied condition "Succeeded or Failed" +Oct 27 14:03:50.910: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-068916eb-524f-4c90-b16a-7936fe716e4a container client-container: +STEP: delete the pod +Oct 27 14:03:51.098: INFO: Waiting for pod downwardapi-volume-068916eb-524f-4c90-b16a-7936fe716e4a to disappear +Oct 27 14:03:51.187: INFO: Pod downwardapi-volume-068916eb-524f-4c90-b16a-7936fe716e4a no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:51.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-206" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":17,"skipped":240,"failed":0} + +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:51.459: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4493 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:03:52.184: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 14:03:57.248: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4493 --namespace=crd-publish-openapi-4493 create -f -' +Oct 27 14:03:58.631: INFO: stderr: "" +Oct 27 14:03:58.631: INFO: stdout: "e2e-test-crd-publish-openapi-5819-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 14:03:58.631: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4493 --namespace=crd-publish-openapi-4493 delete e2e-test-crd-publish-openapi-5819-crds test-cr' +Oct 27 14:03:59.056: INFO: stderr: "" +Oct 27 14:03:59.056: INFO: stdout: "e2e-test-crd-publish-openapi-5819-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Oct 27 14:03:59.056: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4493 --namespace=crd-publish-openapi-4493 apply -f -' +Oct 27 14:03:59.783: INFO: stderr: "" +Oct 27 14:03:59.784: INFO: stdout: "e2e-test-crd-publish-openapi-5819-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 14:03:59.784: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4493 --namespace=crd-publish-openapi-4493 delete e2e-test-crd-publish-openapi-5819-crds test-cr' +Oct 27 14:04:00.202: INFO: stderr: "" +Oct 27 14:04:00.202: INFO: stdout: "e2e-test-crd-publish-openapi-5819-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 14:04:00.203: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4493 explain e2e-test-crd-publish-openapi-5819-crds' +Oct 27 14:04:00.676: INFO: stderr: "" +Oct 27 14:04:00.676: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5819-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:04:05.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4493" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":18,"skipped":240,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:04:05.988: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-9623 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Oct 27 14:04:07.080: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9623 a51951d0-7df6-4367-a9d8-90c846c9145d 5486 0 2021-10-27 14:04:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:04:07.080: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9623 a51951d0-7df6-4367-a9d8-90c846c9145d 5487 0 2021-10-27 14:04:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:04:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Oct 27 14:04:07.441: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9623 a51951d0-7df6-4367-a9d8-90c846c9145d 5491 0 2021-10-27 14:04:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:04:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:04:07.441: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9623 a51951d0-7df6-4367-a9d8-90c846c9145d 5492 0 2021-10-27 14:04:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:04:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:04:07.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-9623" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":19,"skipped":248,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:04:07.623: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3622 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3622 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3622;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3622 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3622;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3622.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3622.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3622.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3622.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3622.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3622.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3622.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3622.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3622.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3622.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 135.83.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.83.135_udp@PTR;check="$$(dig +tcp +noall +answer +search 135.83.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.83.135_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3622 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3622;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3622 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3622;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3622.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3622.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3622.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3622.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3622.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3622.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3622.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3622.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3622.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3622.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3622.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 135.83.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.83.135_udp@PTR;check="$$(dig +tcp +noall +answer +search 135.83.64.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.64.83.135_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:04:19.015: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:19.153: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:19.246: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:19.340: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:19.433: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:19.525: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:19.625: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:19.761: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:20.410: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:20.503: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:20.596: INFO: Unable to read jessie_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:20.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:20.781: INFO: Unable to read jessie_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:20.874: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:20.966: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:21.059: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:21.621: INFO: Lookups using dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3622 wheezy_tcp@dns-test-service.dns-3622 wheezy_udp@dns-test-service.dns-3622.svc wheezy_tcp@dns-test-service.dns-3622.svc wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3622 jessie_tcp@dns-test-service.dns-3622 jessie_udp@dns-test-service.dns-3622.svc jessie_tcp@dns-test-service.dns-3622.svc jessie_udp@_http._tcp.dns-test-service.dns-3622.svc jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc] + +Oct 27 14:04:26.718: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:26.811: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:26.904: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:26.996: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:27.089: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:27.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:27.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:27.367: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.067: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.160: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.253: INFO: Unable to read jessie_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.346: INFO: Unable to read jessie_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.438: INFO: Unable to read jessie_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.531: INFO: Unable to read jessie_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.624: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:28.717: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:29.274: INFO: Lookups using dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3622 wheezy_tcp@dns-test-service.dns-3622 wheezy_udp@dns-test-service.dns-3622.svc wheezy_tcp@dns-test-service.dns-3622.svc wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3622 jessie_tcp@dns-test-service.dns-3622 jessie_udp@dns-test-service.dns-3622.svc jessie_tcp@dns-test-service.dns-3622.svc jessie_udp@_http._tcp.dns-test-service.dns-3622.svc jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc] + +Oct 27 14:04:31.716: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:31.809: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:31.902: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:31.995: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:32.088: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:32.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:32.273: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:32.366: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.041: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.134: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.227: INFO: Unable to read jessie_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.319: INFO: Unable to read jessie_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.412: INFO: Unable to read jessie_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.599: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:33.692: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:34.250: INFO: Lookups using dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3622 wheezy_tcp@dns-test-service.dns-3622 
wheezy_udp@dns-test-service.dns-3622.svc wheezy_tcp@dns-test-service.dns-3622.svc wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3622 jessie_tcp@dns-test-service.dns-3622 jessie_udp@dns-test-service.dns-3622.svc jessie_tcp@dns-test-service.dns-3622.svc jessie_udp@_http._tcp.dns-test-service.dns-3622.svc jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc] + +Oct 27 14:04:36.717: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:36.809: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:36.903: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:36.995: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:37.089: INFO: Unable to read wheezy_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:37.301: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:37.504: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:37.603: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.253: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.346: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.439: INFO: Unable to read jessie_udp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.532: INFO: Unable to read jessie_tcp@dns-test-service.dns-3622 from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.624: INFO: Unable 
to read jessie_udp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.717: INFO: Unable to read jessie_tcp@dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.810: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:38.905: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc from pod dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e: the server could not find the requested resource (get pods dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e) +Oct 27 14:04:39.463: INFO: Lookups using dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3622 wheezy_tcp@dns-test-service.dns-3622 wheezy_udp@dns-test-service.dns-3622.svc wheezy_tcp@dns-test-service.dns-3622.svc wheezy_udp@_http._tcp.dns-test-service.dns-3622.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3622.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3622 jessie_tcp@dns-test-service.dns-3622 jessie_udp@dns-test-service.dns-3622.svc jessie_tcp@dns-test-service.dns-3622.svc jessie_udp@_http._tcp.dns-test-service.dns-3622.svc jessie_tcp@_http._tcp.dns-test-service.dns-3622.svc] + +Oct 27 14:04:44.232: INFO: DNS probes using dns-3622/dns-test-7677916e-9c4b-415c-ab5d-082feef6c13e succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:04:44.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3622" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":20,"skipped":267,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:04:44.701: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4023 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-downwardapi-hfns +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:04:45.711: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hfns" in namespace "subpath-4023" to be "Succeeded or Failed" +Oct 27 14:04:45.802: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Pending", Reason="", readiness=false. Elapsed: 90.169818ms +Oct 27 14:04:47.893: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 2.181578337s +Oct 27 14:04:49.986: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 4.274781446s +Oct 27 14:04:52.078: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 6.366442359s +Oct 27 14:04:54.170: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 8.458396374s +Oct 27 14:04:56.261: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 10.549859847s +Oct 27 14:04:58.352: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 12.640474065s +Oct 27 14:05:00.443: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 14.731489557s +Oct 27 14:05:02.534: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 16.822998931s +Oct 27 14:05:04.626: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 18.914927999s +Oct 27 14:05:06.718: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Running", Reason="", readiness=true. Elapsed: 21.006507679s +Oct 27 14:05:08.810: INFO: Pod "pod-subpath-test-downwardapi-hfns": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 23.098152978s +STEP: Saw pod success +Oct 27 14:05:08.810: INFO: Pod "pod-subpath-test-downwardapi-hfns" satisfied condition "Succeeded or Failed" +Oct 27 14:05:08.900: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-subpath-test-downwardapi-hfns container test-container-subpath-downwardapi-hfns: +STEP: delete the pod +Oct 27 14:05:09.091: INFO: Waiting for pod pod-subpath-test-downwardapi-hfns to disappear +Oct 27 14:05:09.181: INFO: Pod pod-subpath-test-downwardapi-hfns no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-hfns +Oct 27 14:05:09.181: INFO: Deleting pod "pod-subpath-test-downwardapi-hfns" in namespace "subpath-4023" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:09.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4023" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":21,"skipped":283,"failed":0} +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:09.542: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8747 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:05:10.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c738cf9-a8b0-4036-b8b0-0eef703f357f" in namespace "projected-8747" to be "Succeeded or Failed" +Oct 27 14:05:10.465: INFO: Pod "downwardapi-volume-6c738cf9-a8b0-4036-b8b0-0eef703f357f": Phase="Pending", Reason="", readiness=false. Elapsed: 90.316896ms +Oct 27 14:05:12.555: INFO: Pod "downwardapi-volume-6c738cf9-a8b0-4036-b8b0-0eef703f357f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181113908s +STEP: Saw pod success +Oct 27 14:05:12.555: INFO: Pod "downwardapi-volume-6c738cf9-a8b0-4036-b8b0-0eef703f357f" satisfied condition "Succeeded or Failed" +Oct 27 14:05:12.646: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-6c738cf9-a8b0-4036-b8b0-0eef703f357f container client-container: +STEP: delete the pod +Oct 27 14:05:12.874: INFO: Waiting for pod downwardapi-volume-6c738cf9-a8b0-4036-b8b0-0eef703f357f to disappear +Oct 27 14:05:12.964: INFO: Pod downwardapi-volume-6c738cf9-a8b0-4036-b8b0-0eef703f357f no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:12.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8747" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":22,"skipped":286,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:13.234: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9943 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:05:15.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940315, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940315, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940315, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940315, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:05:18.858: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:20.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9943" for this suite. +STEP: Destroying namespace "webhook-9943-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":23,"skipped":315,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:21.485: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7502 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:05:23.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940322, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940322, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940322, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940322, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:05:26.440: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Oct 27 14:05:29.078: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=webhook-7502 attach --namespace=webhook-7502 to-be-attached-pod -i -c=container1' +Oct 27 14:05:29.731: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:29.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7502" for this suite. +STEP: Destroying namespace "webhook-7502-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":24,"skipped":361,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:30.553: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-8604 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:05:31.287: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 14:05:36.146: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8604 --namespace=crd-publish-openapi-8604 create -f -' +Oct 27 14:05:37.566: INFO: stderr: "" +Oct 27 14:05:37.566: INFO: stdout: "e2e-test-crd-publish-openapi-7919-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 14:05:37.566: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8604 --namespace=crd-publish-openapi-8604 delete e2e-test-crd-publish-openapi-7919-crds test-cr' +Oct 27 14:05:37.983: 
INFO: stderr: "" +Oct 27 14:05:37.983: INFO: stdout: "e2e-test-crd-publish-openapi-7919-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Oct 27 14:05:37.983: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8604 --namespace=crd-publish-openapi-8604 apply -f -' +Oct 27 14:05:38.710: INFO: stderr: "" +Oct 27 14:05:38.710: INFO: stdout: "e2e-test-crd-publish-openapi-7919-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 14:05:38.710: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8604 --namespace=crd-publish-openapi-8604 delete e2e-test-crd-publish-openapi-7919-crds test-cr' +Oct 27 14:05:39.128: INFO: stderr: "" +Oct 27 14:05:39.128: INFO: stdout: "e2e-test-crd-publish-openapi-7919-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 14:05:39.128: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8604 explain e2e-test-crd-publish-openapi-7919-crds' +Oct 27 14:05:39.566: INFO: stderr: "" +Oct 27 14:05:39.566: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7919-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:44.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8604" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":25,"skipped":424,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:44.588: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2952 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:05:46.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940346, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940346, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940346, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940346, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:05:49.522: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:06:01.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2952" for this suite. +STEP: Destroying namespace "webhook-2952-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":26,"skipped":493,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:06:01.785: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2442 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-405b5446-eb31-4a00-8502-f7a97dcb9177 +STEP: Creating configMap with name cm-test-opt-upd-232894e7-8643-48db-aca5-5612868148fd +STEP: Creating the pod +Oct 27 14:06:02.978: INFO: The status of Pod pod-projected-configmaps-a859b5dd-5143-4297-b3f9-aebafb2be741 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:06:05.069: INFO: The status of Pod pod-projected-configmaps-a859b5dd-5143-4297-b3f9-aebafb2be741 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-405b5446-eb31-4a00-8502-f7a97dcb9177 +STEP: Updating configmap cm-test-opt-upd-232894e7-8643-48db-aca5-5612868148fd +STEP: Creating configMap with name cm-test-opt-create-08951850-0e59-44b8-9806-795133e832b4 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:19.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2442" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":27,"skipped":507,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:19.789: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4075 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:07:20.624: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63c07758-8d5c-441b-bb19-de2fd901887a" in namespace "downward-api-4075" to be "Succeeded or Failed" +Oct 27 14:07:20.716: INFO: Pod "downwardapi-volume-63c07758-8d5c-441b-bb19-de2fd901887a": Phase="Pending", Reason="", readiness=false. Elapsed: 91.193561ms +Oct 27 14:07:22.808: INFO: Pod "downwardapi-volume-63c07758-8d5c-441b-bb19-de2fd901887a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.183265678s +STEP: Saw pod success +Oct 27 14:07:22.808: INFO: Pod "downwardapi-volume-63c07758-8d5c-441b-bb19-de2fd901887a" satisfied condition "Succeeded or Failed" +Oct 27 14:07:22.898: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-63c07758-8d5c-441b-bb19-de2fd901887a container client-container: +STEP: delete the pod +Oct 27 14:07:23.089: INFO: Waiting for pod downwardapi-volume-63c07758-8d5c-441b-bb19-de2fd901887a to disappear +Oct 27 14:07:23.179: INFO: Pod downwardapi-volume-63c07758-8d5c-441b-bb19-de2fd901887a no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:23.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4075" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":28,"skipped":519,"failed":0} +SS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:23.450: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-377 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:41.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-377" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":346,"completed":29,"skipped":521,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:41.767: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3081 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service endpoint-test2 in namespace services-3081 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3081 to expose endpoints map[] +Oct 27 14:07:42.866: INFO: successfully validated that service endpoint-test2 in namespace services-3081 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-3081 +Oct 27 14:07:43.053: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:07:45.156: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3081 to expose endpoints map[pod1:[80]] +Oct 27 14:07:45.606: INFO: successfully validated that service endpoint-test2 in namespace services-3081 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Oct 27 14:07:45.606: INFO: Creating new exec pod +Oct 27 14:07:48.881: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3081 exec execpodbkggb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:07:49.958: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:49.958: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:07:49.958: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3081 exec execpodbkggb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.122.226 80' +Oct 27 14:07:50.994: INFO: stderr: "+ nc -v -t -w 2 100.68.122.226 80\nConnection to 100.68.122.226 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Oct 27 14:07:50.994: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-3081 +Oct 27 14:07:51.182: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:07:53.274: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for 
service endpoint-test2 in namespace services-3081 to expose endpoints map[pod1:[80] pod2:[80]] +Oct 27 14:07:53.814: INFO: successfully validated that service endpoint-test2 in namespace services-3081 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Oct 27 14:07:54.815: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3081 exec execpodbkggb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:07:55.885: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:55.885: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:07:55.885: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3081 exec execpodbkggb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.122.226 80' +Oct 27 14:07:56.896: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.68.122.226 80\nConnection to 100.68.122.226 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:56.896: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-3081 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3081 to expose endpoints map[pod2:[80]] +Oct 27 14:07:57.348: INFO: successfully validated that service endpoint-test2 in namespace services-3081 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Oct 27 14:07:58.349: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3081 exec execpodbkggb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:07:59.397: INFO: stderr: "+ + ncecho -v hostName\n -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:59.397: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:07:59.397: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3081 exec execpodbkggb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.122.226 80' +Oct 27 14:08:00.464: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.68.122.226 80\nConnection to 100.68.122.226 80 port [tcp/http] succeeded!\n" +Oct 27 14:08:00.465: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-3081 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3081 to expose endpoints map[] +Oct 27 14:08:00.827: INFO: successfully validated that service endpoint-test2 in namespace services-3081 exposes endpoints map[] +[AfterEach] [sig-network] Services + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:00.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3081" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":30,"skipped":535,"failed":0} +S +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:01.194: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4861 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating secret secrets-4861/secret-test-4ee80ef5-e74e-4dad-acf0-b790f2973043 +STEP: Creating a pod to test consume secrets +Oct 27 14:08:02.113: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e1ca85f-12d0-4c1b-86cf-f258c440af56" in namespace "secrets-4861" to be "Succeeded or Failed" +Oct 27 14:08:02.203: INFO: Pod "pod-configmaps-1e1ca85f-12d0-4c1b-86cf-f258c440af56": Phase="Pending", Reason="", readiness=false. Elapsed: 90.297392ms +Oct 27 14:08:04.295: INFO: Pod "pod-configmaps-1e1ca85f-12d0-4c1b-86cf-f258c440af56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182229857s +STEP: Saw pod success +Oct 27 14:08:04.295: INFO: Pod "pod-configmaps-1e1ca85f-12d0-4c1b-86cf-f258c440af56" satisfied condition "Succeeded or Failed" +Oct 27 14:08:04.385: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-1e1ca85f-12d0-4c1b-86cf-f258c440af56 container env-test: +STEP: delete the pod +Oct 27 14:08:04.576: INFO: Waiting for pod pod-configmaps-1e1ca85f-12d0-4c1b-86cf-f258c440af56 to disappear +Oct 27 14:08:04.666: INFO: Pod pod-configmaps-1e1ca85f-12d0-4c1b-86cf-f258c440af56 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:04.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4861" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":31,"skipped":536,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:04.937: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-765 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-765 +STEP: creating service affinity-nodeport in namespace services-765 +STEP: creating replication controller affinity-nodeport in namespace services-765 +I1027 14:08:05.870964 5725 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-765, replica count: 3 +I1027 14:08:08.972611 5725 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:08:09.332: INFO: Creating new exec pod +Oct 27 14:08:12.791: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-765 exec execpod-affinitysm5pw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Oct 27 14:08:13.888: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Oct 27 14:08:13.888: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:08:13.888: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-765 exec execpod-affinitysm5pw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.180.56 80' +Oct 27 14:08:14.902: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.180.56 80\nConnection to 100.71.180.56 80 port [tcp/http] succeeded!\n" +Oct 27 14:08:14.902: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:08:14.902: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-765 exec execpod-affinitysm5pw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.28.25 31755' +Oct 27 14:08:16.118: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.28.25 31755\nConnection to 
10.250.28.25 31755 port [tcp/*] succeeded!\n" +Oct 27 14:08:16.118: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:08:16.118: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-765 exec execpod-affinitysm5pw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.9.48 31755' +Oct 27 14:08:17.196: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.9.48 31755\nConnection to 10.250.9.48 31755 port [tcp/*] succeeded!\n" +Oct 27 14:08:17.196: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:08:17.197: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-765 exec execpod-affinitysm5pw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.28.25:31755/ ; done' +Oct 27 14:08:18.257: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:31755/\n" +Oct 27 14:08:18.257: INFO: stdout: "\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927\naffinity-nodeport-97927" +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received 
response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Received response from host: affinity-nodeport-97927 +Oct 27 14:08:18.257: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-765, will wait for the garbage collector to delete the pods +Oct 27 14:08:18.634: INFO: Deleting ReplicationController affinity-nodeport took: 91.069567ms +Oct 27 14:08:18.735: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.042745ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:21.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-765" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":32,"skipped":544,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:21.418: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6874 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Oct 27 14:08:22.150: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:08:26.442: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:44.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6874" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":33,"skipped":559,"failed":0} +S +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:44.818: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6607 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replication controller my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3 +Oct 27 14:08:45.732: INFO: Pod name my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3: Found 1 pods out of 1 +Oct 27 14:08:45.732: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3" are running +Oct 27 14:08:47.913: INFO: Pod "my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3-dqffc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:08:45 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:08:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:08:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:08:45 +0000 UTC Reason: Message:}]) +Oct 27 14:08:47.913: INFO: Trying to dial the pod +Oct 27 14:08:53.237: INFO: Controller my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3: Got expected result from replica 1 [my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3-dqffc]: "my-hostname-basic-c46ae181-a693-43c4-a883-db48e6b04be3-dqffc", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:53.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6607" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":34,"skipped":560,"failed":0} +SSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:53.514: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6039 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:09:05.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6039" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":346,"completed":35,"skipped":563,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:09:06.163: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-6831 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:09:07.168: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:10:07.992: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. 
+Oct 27 14:10:08.278: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 14:10:08.373: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 14:10:08.566: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 14:10:08.661: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:26.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-6831" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":36,"skipped":577,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:26.859: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9658 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Oct 27 14:10:27.771: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:34.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9658" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":37,"skipped":605,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:34.385: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-5834 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:10:35.753: INFO: Number of nodes with available pods: 0 +Oct 27 14:10:35.753: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:10:37.024: INFO: Number of nodes with available pods: 0 +Oct 27 14:10:37.024: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:10:38.024: INFO: Number of nodes with available pods: 2 +Oct 27 14:10:38.024: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +Oct 27 14:10:38.572: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"8116"},"items":null} + +Oct 27 14:10:38.663: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"8116"},"items":[{"metadata":{"name":"daemon-set-bf2rw","generateName":"daemon-set-","namespace":"daemonsets-5834","uid":"2085e358-60be-44e7-bed8-7ca77d1e360e","resourceVersion":"8116","creationTimestamp":"2021-10-27T14:10:35Z","deletionTimestamp":"2021-10-27T14:11:08Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"6f42e8a64311bc9a9b720827e2d6f191bacfea9d98a08b976be51ea67e450865","cni.projectcalico.org/podIP":"100.96.0.26/32","cni.projectcalico.org/podIPs":"100.96.0.26/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"5ac882cf-6541-44cb-801a-8cd0d8ce5aa6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:10:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac882cf-6541-44cb-801a-8cd0d8ce5aa6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:10:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:10:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-qlh26","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tm94z-0j6.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-qlh26","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serv
iceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-10-250-9-48.ec2.internal","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["ip-10-250-9-48.ec2.internal"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:35Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:37Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:37Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:35Z"}],"hostIP":"10.250.9.48","podIP":"100.96.0.26","podIPs":[{"ip":"100.96.0.26"}],"startTime":"2021-10-27T14:10:35Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T14:10:36Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"docker://d1e0e974c1a864b4877db83be39af53424922329db170b7466c9c45fc7539134","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-jn96w","generateName":"daemon-set-","namespace":"daemonsets-5834","uid":"55c2e354-3691-46e3-8991-a52ad17433b9","resourceVersion":"8115","creationTimestamp":"2021-10-27T14:10:35Z","deletionTimestamp":"2021-10-27T14:11:08Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"7021a6221bcc050ef6b23399f65c63f32d1e830d9589adc4a34e0e463341935d","cni.projectcalico.org/podIP":"100.96.1.49/32","cni.projectcalico.org/podIPs":"100.96.1.49/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"5ac882cf-6541-44cb-801a-8cd0d8ce5aa6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:10:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac882cf-6541-44cb-801a-8cd0d8ce5aa6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"na
me\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:10:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:10:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.49\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-tzgkw","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tm94z-0j6.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-tzgkw","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-10-250-28-25.ec2.internal","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["ip-10-250-28-25.ec2.internal"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:35Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:37Z"},{"type":"Co
ntainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:37Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:10:35Z"}],"hostIP":"10.250.28.25","podIP":"100.96.1.49","podIPs":[{"ip":"100.96.1.49"}],"startTime":"2021-10-27T14:10:35Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T14:10:36Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"docker://7c3fee3a485edb40fdd4e91e55a863c59128749bde2c0ef1d7ed6f32a8800ad8","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:38.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-5834" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":38,"skipped":660,"failed":0} +S +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:39.118: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2063 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:02.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2063" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":39,"skipped":661,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:02.485: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-1340 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:12:04.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940724, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940724, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940724, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940724, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:12:07.670: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:12:07.761: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:11.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-1340" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":40,"skipped":692,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:12.027: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9999 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:12:13.101: INFO: The status of Pod busybox-host-aliasese85e9a5b-e9f8-4ff0-90b3-25bca7dfb9b4 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:12:15.192: INFO: The status of Pod busybox-host-aliasese85e9a5b-e9f8-4ff0-90b3-25bca7dfb9b4 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:15.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9999" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":41,"skipped":702,"failed":0} +S +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:15.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4744 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:12:16.497: INFO: Waiting up to 5m0s for pod "downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241" in namespace "downward-api-4744" to be "Succeeded or Failed" +Oct 27 14:12:16.587: INFO: Pod "downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241": Phase="Pending", Reason="", readiness=false. Elapsed: 90.406423ms +Oct 27 14:12:18.678: INFO: Pod "downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181633876s +Oct 27 14:12:20.769: INFO: Pod "downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.272673292s +STEP: Saw pod success +Oct 27 14:12:20.769: INFO: Pod "downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241" satisfied condition "Succeeded or Failed" +Oct 27 14:12:20.859: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241 container dapi-container: +STEP: delete the pod +Oct 27 14:12:21.052: INFO: Waiting for pod downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241 to disappear +Oct 27 14:12:21.142: INFO: Pod downward-api-8e7e00d2-43af-4213-a2ac-2ea6a7e45241 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:21.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4744" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":42,"skipped":703,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:21.413: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7626 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:12:22.145: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:26.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-7626" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":43,"skipped":734,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:26.825: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-475 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-475 +Oct 27 14:12:27.744: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:12:29.835: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 14:12:29.926: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-475 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 14:12:30.988: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 14:12:30.988: INFO: stdout: "iptables" +Oct 27 14:12:30.988: INFO: proxyMode: iptables +Oct 27 14:12:31.083: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 27 14:12:31.173: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-475 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-475 +I1027 14:12:31.359001 5725 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-475, replica count: 3 +I1027 14:12:34.510385 5725 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:12:34.690: INFO: Creating new exec pod +Oct 27 14:12:37.966: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-475 exec execpod-affinityktl9j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Oct 27 14:12:39.051: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 14:12:39.051: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:12:39.051: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-475 exec execpod-affinityktl9j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.147.193 80' +Oct 27 14:12:40.116: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.147.193 80\nConnection to 100.70.147.193 80 port [tcp/http] succeeded!\n" +Oct 27 14:12:40.116: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:12:40.116: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-475 exec execpod-affinityktl9j -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.70.147.193:80/ ; done' +Oct 27 14:12:41.267: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n" +Oct 27 14:12:41.267: INFO: stdout: "\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd\naffinity-clusterip-timeout-wvcxd" +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from 
host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Received response from host: affinity-clusterip-timeout-wvcxd +Oct 27 14:12:41.267: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-475 exec execpod-affinityktl9j -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.70.147.193:80/' +Oct 27 14:12:42.327: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n" +Oct 27 14:12:42.327: INFO: stdout: "affinity-clusterip-timeout-wvcxd" +Oct 27 14:13:02.329: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-475 exec execpod-affinityktl9j -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.70.147.193:80/' +Oct 27 14:13:03.366: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n" +Oct 27 14:13:03.366: INFO: stdout: "affinity-clusterip-timeout-wvcxd" +Oct 27 14:13:23.366: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-475 exec execpod-affinityktl9j -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.70.147.193:80/' +Oct 27 14:13:24.457: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.70.147.193:80/\n" +Oct 27 14:13:24.458: INFO: stdout: "affinity-clusterip-timeout-hdqsq" +Oct 27 14:13:24.458: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-475, will wait for the garbage collector to delete the pods +Oct 27 14:13:24.834: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 91.216578ms +Oct 27 14:13:24.935: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.693352ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:27.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-475" for this suite. 
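+For reference, the behavior verified above (each client pinned to one endpoint until the affinity window lapses, here switching from -wvcxd to -hdqsq) is configured with ClientIP session affinity on the Service; a minimal sketch, where the name mirrors the test and the 10s timeout is an assumed value, not read from this run:
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: affinity-clusterip-timeout
+spec:
+  selector:
+    name: affinity-clusterip-timeout
+  ports:
+  - port: 80
+    targetPort: 9376
+  sessionAffinity: ClientIP          # pin each client IP to one backend
+  sessionAffinityConfig:
+    clientIP:
+      timeoutSeconds: 10             # assumed value; affinity expires after this idle period
+EOF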
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":44,"skipped":762,"failed":0} +SSSSS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:27.213: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4328 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +Oct 27 14:13:28.036: INFO: Creating simple deployment test-deployment-ht284 +Oct 27 14:13:28.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940808, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940808, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940808, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940808, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-ht284-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Getting /status +Oct 27 14:13:30.670: INFO: Deployment test-deployment-ht284 has Conditions: [{Available True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-ht284-794dd694d8" has successfully progressed.}] +STEP: updating Deployment Status +Oct 27 14:13:30.852: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940809, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940809, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940809, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940808, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-ht284-794dd694d8\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Oct 27 14:13:30.943: INFO: Observed &Deployment event: ADDED +Oct 27 14:13:30.943: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-ht284-794dd694d8"} +Oct 27 14:13:30.943: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:30.943: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-ht284-794dd694d8"} +Oct 27 14:13:30.943: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:13:30.943: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:30.943: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:13:30.943: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-ht284-794dd694d8" is progressing.} +Oct 27 14:13:30.943: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:30.943: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:13:30.943: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-ht284-794dd694d8" has successfully progressed.} +Oct 27 14:13:30.943: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:30.944: INFO: Observed Deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:13:30.944: INFO: Observed Deployment 
test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-ht284-794dd694d8" has successfully progressed.} +Oct 27 14:13:30.944: INFO: Found Deployment test-deployment-ht284 in namespace deployment-4328 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:13:30.944: INFO: Deployment test-deployment-ht284 has an updated status +STEP: patching the Statefulset Status +Oct 27 14:13:30.944: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:13:31.035: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Oct 27 14:13:31.125: INFO: Observed &Deployment event: ADDED +Oct 27 14:13:31.125: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-ht284-794dd694d8"} +Oct 27 14:13:31.125: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:31.125: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-ht284-794dd694d8"} +Oct 27 14:13:31.125: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:13:31.125: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:31.125: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:13:31.125: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:28 +0000 UTC 2021-10-27 14:13:28 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-ht284-794dd694d8" is progressing.} +Oct 27 14:13:31.126: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:31.126: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:13:31.126: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with 
annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-ht284-794dd694d8" has successfully progressed.} +Oct 27 14:13:31.126: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:31.126: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:13:31.126: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:13:29 +0000 UTC 2021-10-27 14:13:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-ht284-794dd694d8" has successfully progressed.} +Oct 27 14:13:31.126: INFO: Observed deployment test-deployment-ht284 in namespace deployment-4328 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:13:31.126: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:13:31.126: INFO: Found deployment test-deployment-ht284 in namespace deployment-4328 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Oct 27 14:13:31.126: INFO: Deployment test-deployment-ht284 has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:13:31.217: INFO: Deployment "test-deployment-ht284": +&Deployment{ObjectMeta:{test-deployment-ht284 deployment-4328 e850f29d-80fb-44b1-9a4d-f5bf9cf37a7a 9222 1 2021-10-27 14:13:28 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 14:13:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2021-10-27 14:13:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2021-10-27 14:13:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0081160c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:13:31 +0000 UTC,LastTransitionTime:2021-10-27 14:13:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-ht284-794dd694d8" has successfully progressed.,LastUpdateTime:2021-10-27 14:13:31 +0000 UTC,LastTransitionTime:2021-10-27 14:13:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:13:31.307: INFO: New ReplicaSet "test-deployment-ht284-794dd694d8" of Deployment "test-deployment-ht284": +&ReplicaSet{ObjectMeta:{test-deployment-ht284-794dd694d8 deployment-4328 f9d11120-4c46-4fe0-8801-af8d13202e9d 9210 1 2021-10-27 14:13:28 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-ht284 e850f29d-80fb-44b1-9a4d-f5bf9cf37a7a 0xc0081164d7 0xc0081164d8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:13:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e850f29d-80fb-44b1-9a4d-f5bf9cf37a7a\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:13:29 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 794dd694d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008116588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:13:31.398: INFO: Pod "test-deployment-ht284-794dd694d8-j48bj" is available: +&Pod{ObjectMeta:{test-deployment-ht284-794dd694d8-j48bj test-deployment-ht284-794dd694d8- deployment-4328 42b16978-87a0-4afc-b9d7-b3bb041f1a36 9209 0 2021-10-27 14:13:28 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[cni.projectcalico.org/containerID:a2ba15b5b8cc8fddfe5a3bc591f67cfca673fa6f91aee4aab78c47e399e401e7 cni.projectcalico.org/podIP:100.96.1.59/32 cni.projectcalico.org/podIPs:100.96.1.59/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-ht284-794dd694d8 f9d11120-4c46-4fe0-8801-af8d13202e9d 0xc008116937 0xc008116938}] [] [{kube-controller-manager Update v1 2021-10-27 14:13:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d11120-4c46-4fe0-8801-af8d13202e9d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 
14:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:13:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5g95d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5g95d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,Secco
mpProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:13:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:13:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:13:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:13:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.59,StartTime:2021-10-27 14:13:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:13:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://0b54adbaee0b6b7e861c6fd8bf97e47e52307f9b779b51d151623e20bd77f0a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:31.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4328" for this suite. 
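+For reference, the reads, updates, and patches above all go through the Deployment status subresource; a minimal sketch using the raw API path (namespace and name taken from this run, but any Deployment works):
+kubectl get --raw /apis/apps/v1/namespaces/deployment-4328/deployments/test-deployment-ht284/status
+# With a newer kubectl (the --subresource flag arrived after the v1.22 series),
+# the same condition patch seen in the log can be applied directly:
+kubectl patch deployment test-deployment-ht284 -n deployment-4328 --subresource=status \
+  --type=merge -p '{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}'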
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":45,"skipped":767,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:31.580: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3795 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 14:13:32.408: INFO: Waiting up to 5m0s for pod "pod-92bb350f-8fff-43b5-9a8a-30b290c81108" in namespace "emptydir-3795" to be "Succeeded or Failed" +Oct 27 14:13:32.498: INFO: Pod "pod-92bb350f-8fff-43b5-9a8a-30b290c81108": Phase="Pending", Reason="", readiness=false. Elapsed: 90.04797ms +Oct 27 14:13:34.589: INFO: Pod "pod-92bb350f-8fff-43b5-9a8a-30b290c81108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181227173s +Oct 27 14:13:36.680: INFO: Pod "pod-92bb350f-8fff-43b5-9a8a-30b290c81108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.27262654s +STEP: Saw pod success +Oct 27 14:13:36.680: INFO: Pod "pod-92bb350f-8fff-43b5-9a8a-30b290c81108" satisfied condition "Succeeded or Failed" +Oct 27 14:13:36.771: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-92bb350f-8fff-43b5-9a8a-30b290c81108 container test-container: +STEP: delete the pod +Oct 27 14:13:37.004: INFO: Waiting for pod pod-92bb350f-8fff-43b5-9a8a-30b290c81108 to disappear +Oct 27 14:13:37.094: INFO: Pod pod-92bb350f-8fff-43b5-9a8a-30b290c81108 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:37.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3795" for this suite. 
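+For reference, these emptyDir permission cases boil down to a pod mounting a memory-backed emptyDir and checking the mount's mode; a minimal sketch (image, paths, and names are illustrative):
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "stat -c '%a' /mnt/volume"]   # prints the mode, e.g. 777
+    volumeMounts:
+    - name: vol
+      mountPath: /mnt/volume
+  volumes:
+  - name: vol
+    emptyDir:
+      medium: Memory   # tmpfs; omit this line for the node-default medium
+EOF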
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":46,"skipped":798,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:37.365: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7602 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-7602 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-7602 +STEP: Waiting until pod test-pod will start running in namespace statefulset-7602 +STEP: Creating statefulset with conflicting port in namespace statefulset-7602 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7602 +Oct 27 14:13:41.000: INFO: Observed stateful pod in namespace: statefulset-7602, name: ss-0, uid: ee15e6b6-61a6-4bc7-8bf2-9bc71630b07a, status phase: Pending. Waiting for statefulset controller to delete. +Oct 27 14:13:41.001: INFO: Observed stateful pod in namespace: statefulset-7602, name: ss-0, uid: ee15e6b6-61a6-4bc7-8bf2-9bc71630b07a, status phase: Failed. Waiting for statefulset controller to delete. +Oct 27 14:13:41.001: INFO: Observed stateful pod in namespace: statefulset-7602, name: ss-0, uid: ee15e6b6-61a6-4bc7-8bf2-9bc71630b07a, status phase: Failed. Waiting for statefulset controller to delete. 
+Oct 27 14:13:41.001: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7602 +STEP: Removing pod with conflicting port in namespace statefulset-7602 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7602 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:13:43.285: INFO: Deleting all statefulset in ns statefulset-7602 +Oct 27 14:13:43.375: INFO: Scaling statefulset ss to 0 +Oct 27 14:13:53.737: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:13:53.827: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:54.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7602" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":47,"skipped":829,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:54.386: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6874 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-6874 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 14:13:55.391: INFO: Found 1 stateful pods, waiting for 3 +Oct 27 14:14:05.484: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:14:05.484: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:14:05.484: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:14:05.756: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-6874 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:14:06.888: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:14:06.888: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:14:06.888: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 14:14:17.442: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Oct 27 14:14:17.713: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-6874 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:14:18.908: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:14:18.908: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:14:18.908: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +STEP: Rolling back to a previous revision +Oct 27 14:14:29.465: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-6874 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:14:30.509: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:14:30.509: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:14:30.509: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:14:41.065: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Oct 27 14:14:41.337: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-6874 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:14:42.375: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:14:42.375: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:14:42.375: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:14:42.739: INFO: Waiting for StatefulSet statefulset-6874/ss2 to complete update +Oct 27 14:14:42.739: INFO: Waiting for Pod statefulset-6874/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 +Oct 27 14:14:42.739: INFO: Waiting for Pod statefulset-6874/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 +Oct 27 14:14:42.739: INFO: Waiting for Pod statefulset-6874/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:14:52.921: INFO: Deleting all statefulset in ns statefulset-6874 +Oct 27 14:14:53.011: INFO: Scaling statefulset ss2 to 0 +Oct 27 14:15:03.375: INFO: Waiting for statefulset 
status.replicas updated to 0 +Oct 27 14:15:03.466: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:03.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6874" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":48,"skipped":843,"failed":0} +SSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:04.008: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-8462 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:05.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-8462" for this suite. 
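+For reference, the object created, updated, and patched above is a policy/v1 PodDisruptionBudget; a minimal sketch (the label selector and minAvailable value are illustrative):
+kubectl apply -f - <<'EOF'
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: demo-pdb
+spec:
+  minAvailable: 1        # evictions are refused if they would leave fewer ready pods
+  selector:
+    matchLabels:
+      app: demo
+EOF
+kubectl get pdb demo-pdb -o jsonpath='{.status.disruptionsAllowed}'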
+•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":49,"skipped":847,"failed":0} +S +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:05.832: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7500 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:07.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7500" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":50,"skipped":848,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:07.565: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5373 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 14:15:08.396: INFO: Waiting up to 5m0s for pod "pod-eae3b743-f660-44ad-b4fd-f077f73ee7b2" in namespace "emptydir-5373" to be "Succeeded or Failed" +Oct 27 14:15:08.488: INFO: Pod "pod-eae3b743-f660-44ad-b4fd-f077f73ee7b2": Phase="Pending", Reason="", readiness=false. Elapsed: 92.32192ms +Oct 27 14:15:10.579: INFO: Pod "pod-eae3b743-f660-44ad-b4fd-f077f73ee7b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.183690322s +STEP: Saw pod success +Oct 27 14:15:10.579: INFO: Pod "pod-eae3b743-f660-44ad-b4fd-f077f73ee7b2" satisfied condition "Succeeded or Failed" +Oct 27 14:15:10.670: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-eae3b743-f660-44ad-b4fd-f077f73ee7b2 container test-container: +STEP: delete the pod +Oct 27 14:15:10.871: INFO: Waiting for pod pod-eae3b743-f660-44ad-b4fd-f077f73ee7b2 to disappear +Oct 27 14:15:10.960: INFO: Pod pod-eae3b743-f660-44ad-b4fd-f077f73ee7b2 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:10.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5373" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":51,"skipped":861,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:11.231: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4438 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:15:12.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ee567d7-d420-47a1-8104-f0077d815bea" in namespace "downward-api-4438" to be "Succeeded or Failed" +Oct 27 14:15:12.154: INFO: Pod "downwardapi-volume-9ee567d7-d420-47a1-8104-f0077d815bea": Phase="Pending", Reason="", readiness=false. Elapsed: 90.206442ms +Oct 27 14:15:14.246: INFO: Pod "downwardapi-volume-9ee567d7-d420-47a1-8104-f0077d815bea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181848406s +STEP: Saw pod success +Oct 27 14:15:14.246: INFO: Pod "downwardapi-volume-9ee567d7-d420-47a1-8104-f0077d815bea" satisfied condition "Succeeded or Failed" +Oct 27 14:15:14.336: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-9ee567d7-d420-47a1-8104-f0077d815bea container client-container: +STEP: delete the pod +Oct 27 14:15:14.567: INFO: Waiting for pod downwardapi-volume-9ee567d7-d420-47a1-8104-f0077d815bea to disappear +Oct 27 14:15:14.657: INFO: Pod downwardapi-volume-9ee567d7-d420-47a1-8104-f0077d815bea no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:14.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4438" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":52,"skipped":884,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:14.928: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7625 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7625 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-7625 +STEP: creating replication controller externalsvc in namespace services-7625 +I1027 14:15:15.941588 5725 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7625, replica count: 2 +I1027 14:15:19.043559 5725 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Oct 27 14:15:19.405: INFO: Creating new exec pod +Oct 27 14:15:21.681: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7625 exec execpodqbflm -- /bin/sh -x -c nslookup clusterip-service.services-7625.svc.cluster.local' +Oct 27 14:15:22.786: INFO: stderr: "+ nslookup clusterip-service.services-7625.svc.cluster.local\n" +Oct 27 14:15:22.786: INFO: stdout: 
"Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nclusterip-service.services-7625.svc.cluster.local\tcanonical name = externalsvc.services-7625.svc.cluster.local.\nName:\texternalsvc.services-7625.svc.cluster.local\nAddress: 100.66.186.48\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-7625, will wait for the garbage collector to delete the pods +Oct 27 14:15:23.069: INFO: Deleting ReplicationController externalsvc took: 91.716132ms +Oct 27 14:15:23.170: INFO: Terminating ReplicationController externalsvc pods took: 101.12443ms +Oct 27 14:15:25.066: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:25.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7625" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":53,"skipped":911,"failed":0} +SSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:25.344: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-979 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:15:26.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d4690bb-512c-434f-9b0c-e49672a37f32" in namespace "downward-api-979" to be "Succeeded or Failed" +Oct 27 14:15:26.265: INFO: Pod "downwardapi-volume-0d4690bb-512c-434f-9b0c-e49672a37f32": Phase="Pending", Reason="", readiness=false. Elapsed: 90.218761ms +Oct 27 14:15:28.356: INFO: Pod "downwardapi-volume-0d4690bb-512c-434f-9b0c-e49672a37f32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181046697s +STEP: Saw pod success +Oct 27 14:15:28.356: INFO: Pod "downwardapi-volume-0d4690bb-512c-434f-9b0c-e49672a37f32" satisfied condition "Succeeded or Failed" +Oct 27 14:15:28.446: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-0d4690bb-512c-434f-9b0c-e49672a37f32 container client-container: +STEP: delete the pod +Oct 27 14:15:28.637: INFO: Waiting for pod downwardapi-volume-0d4690bb-512c-434f-9b0c-e49672a37f32 to disappear +Oct 27 14:15:28.727: INFO: Pod downwardapi-volume-0d4690bb-512c-434f-9b0c-e49672a37f32 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:28.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-979" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":54,"skipped":915,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:28.998: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6246 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:15:32.188: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:32.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-6246" for this suite. 
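+For reference, the fallback verified above ("Expected: &{DONE} to match Container's Termination Message: DONE") is driven by terminationMessagePolicy on the container; a minimal sketch in which nothing is written to the termination-log file, so the kubelet takes the message from the log tail (image and names are illustrative):
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: termlog-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "echo DONE; exit 1"]       # non-zero exit triggers the fallback
+    terminationMessagePolicy: FallbackToLogsOnError
+EOF
+kubectl get pod termlog-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'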
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":55,"skipped":932,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:32.643: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8209 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:15:33.473: INFO: Waiting up to 5m0s for pod "downward-api-25b7e05a-06b2-462f-a2e8-23a5faca4981" in namespace "downward-api-8209" to be "Succeeded or Failed" +Oct 27 14:15:33.564: INFO: Pod "downward-api-25b7e05a-06b2-462f-a2e8-23a5faca4981": Phase="Pending", Reason="", readiness=false. Elapsed: 90.406636ms +Oct 27 14:15:35.655: INFO: Pod "downward-api-25b7e05a-06b2-462f-a2e8-23a5faca4981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181560618s +STEP: Saw pod success +Oct 27 14:15:35.655: INFO: Pod "downward-api-25b7e05a-06b2-462f-a2e8-23a5faca4981" satisfied condition "Succeeded or Failed" +Oct 27 14:15:35.745: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downward-api-25b7e05a-06b2-462f-a2e8-23a5faca4981 container dapi-container: +STEP: delete the pod +Oct 27 14:15:35.937: INFO: Waiting for pod downward-api-25b7e05a-06b2-462f-a2e8-23a5faca4981 to disappear +Oct 27 14:15:36.027: INFO: Pod downward-api-25b7e05a-06b2-462f-a2e8-23a5faca4981 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:36.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8209" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":56,"skipped":950,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:36.298: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5199 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:02.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-5199" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":57,"skipped":963,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:02.425: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2147 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 14:16:03.345: INFO: The status of Pod labelsupdatebaff0191-db0a-400e-8e8b-17246534705b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:16:05.436: INFO: The status of Pod labelsupdatebaff0191-db0a-400e-8e8b-17246534705b is Running (Ready = true) +Oct 27 14:16:06.308: INFO: Successfully updated pod "labelsupdatebaff0191-db0a-400e-8e8b-17246534705b" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:08.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2147" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":58,"skipped":974,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:08.774: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename server-version +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in server-version-1379 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Request ServerVersion +STEP: Confirm major version +Oct 27 14:16:09.595: INFO: Major version: 1 +STEP: Confirm minor version +Oct 27 14:16:09.595: INFO: cleanMinorVersion: 22 +Oct 27 14:16:09.595: INFO: Minor version: 22 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:09.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-1379" for this suite. +•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":59,"skipped":979,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:09.778: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3685 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:16:10.511: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Oct 27 14:16:10.695: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 14:16:12.877: INFO: Creating deployment "test-rolling-update-deployment" +Oct 27 14:16:12.968: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Oct 27 14:16:13.149: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Oct 27 14:16:13.239: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940972, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940972, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940972, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940972, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:16:15.330: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:16:15.604: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3685 30a3f122-f25b-4a7d-9a3b-efacb300d757 10703 1 2021-10-27 14:16:12 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-27 14:16:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:16:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003b40868 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:16:12 +0000 UTC,LastTransitionTime:2021-10-27 14:16:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-27 14:16:14 +0000 UTC,LastTransitionTime:2021-10-27 14:16:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:16:15.695: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-3685 05d407bc-abe5-46a1-a492-15e4ce3f83d7 10696 1 2021-10-27 14:16:12 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 30a3f122-f25b-4a7d-9a3b-efacb300d757 0xc003b40d57 0xc003b40d58}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:16:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30a3f122-f25b-4a7d-9a3b-efacb300d757\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:16:14 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 
0xc003b40e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:16:15.695: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Oct 27 14:16:15.695: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3685 adab26cb-8799-4bcc-a93a-952c4015245a 10702 2 2021-10-27 14:16:10 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 30a3f122-f25b-4a7d-9a3b-efacb300d757 0xc003b40c27 0xc003b40c28}] [] [{e2e.test Update apps/v1 2021-10-27 14:16:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:16:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30a3f122-f25b-4a7d-9a3b-efacb300d757\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:16:14 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003b40ce8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:16:15.786: INFO: Pod "test-rolling-update-deployment-585b757574-b97cl" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-b97cl test-rolling-update-deployment-585b757574- deployment-3685 3ec48a53-79f8-4132-b8f6-60bfeb72f25e 10695 0 2021-10-27 14:16:12 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[cni.projectcalico.org/containerID:9bac1cafb334bbf3fc669a9b167ac468987aee21d4f2ae5c144f8f5659bfddb1 cni.projectcalico.org/podIP:100.96.1.80/32 
cni.projectcalico.org/podIPs:100.96.1.80/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 05d407bc-abe5-46a1-a492-15e4ce3f83d7 0xc003b41267 0xc003b41268}] [] [{kube-controller-manager Update v1 2021-10-27 14:16:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05d407bc-abe5-46a1-a492-15e4ce3f83d7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:16:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:16:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pbjh8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbjh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Secu
rityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:16:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:16:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:16:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:16:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.80,StartTime:2021-10-27 14:16:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:16:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://e74c2c4dd67fe5f6da22671010b76a21487714069fa8a05f8a82ff82ad3828ca,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:15.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-3685" for this suite. 
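+A simplified sketch of the same rolling-update behavior, leaving out the adoption of a pre-existing ReplicaSet (deployment name is illustrative; the image tags are the e2e test images seen in the dump above):
+```bash
+kubectl create deployment rolling-demo \
+  --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --replicas=1
+kubectl rollout status deployment/rolling-demo
+# Changing the pod template starts a RollingUpdate: a new ReplicaSet is
+# created and scaled up while the old one is scaled to 0 and its pods
+# are deleted.
+kubectl set image deployment/rolling-demo httpd=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
+kubectl rollout status deployment/rolling-demo
+kubectl get rs -l app=rolling-demo   # the old ReplicaSet shows DESIRED 0
+```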
+•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":60,"skipped":996,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:16.056: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3675 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-9c39f5a5-27b0-4600-923b-b3dafa8129a8 +STEP: Creating a pod to test consume configMaps +Oct 27 14:16:16.975: INFO: Waiting up to 5m0s for pod "pod-configmaps-635347f6-e69f-4645-9a4b-f9202c356b3c" in namespace "configmap-3675" to be "Succeeded or Failed" +Oct 27 14:16:17.065: INFO: Pod "pod-configmaps-635347f6-e69f-4645-9a4b-f9202c356b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 90.015113ms +Oct 27 14:16:19.156: INFO: Pod "pod-configmaps-635347f6-e69f-4645-9a4b-f9202c356b3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181211644s +STEP: Saw pod success +Oct 27 14:16:19.156: INFO: Pod "pod-configmaps-635347f6-e69f-4645-9a4b-f9202c356b3c" satisfied condition "Succeeded or Failed" +Oct 27 14:16:19.246: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-635347f6-e69f-4645-9a4b-f9202c356b3c container agnhost-container: +STEP: delete the pod +Oct 27 14:16:19.467: INFO: Waiting for pod pod-configmaps-635347f6-e69f-4645-9a4b-f9202c356b3c to disappear +Oct 27 14:16:19.557: INFO: Pod pod-configmaps-635347f6-e69f-4645-9a4b-f9202c356b3c no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:19.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3675" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":61,"skipped":1052,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:19.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6082 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:16:20.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715" in namespace "projected-6082" to be "Succeeded or Failed" +Oct 27 14:16:20.902: INFO: Pod "downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715": Phase="Pending", Reason="", readiness=false. Elapsed: 136.036412ms +Oct 27 14:16:22.994: INFO: Pod "downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22754435s +Oct 27 14:16:25.085: INFO: Pod "downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319132481s +STEP: Saw pod success +Oct 27 14:16:25.086: INFO: Pod "downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715" satisfied condition "Succeeded or Failed" +Oct 27 14:16:25.176: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715 container client-container: +STEP: delete the pod +Oct 27 14:16:25.369: INFO: Waiting for pod downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715 to disappear +Oct 27 14:16:25.459: INFO: Pod downwardapi-volume-8bb1a801-96ef-4112-9fc2-0e6b1e385715 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:25.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6082" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":62,"skipped":1113,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:25.731: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9356 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:16:26.649: INFO: The status of Pod pod-update-6c6c4765-7538-452c-92bd-970de3145d98 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:16:28.740: INFO: The status of Pod pod-update-6c6c4765-7538-452c-92bd-970de3145d98 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 14:16:29.605: INFO: Successfully updated pod "pod-update-6c6c4765-7538-452c-92bd-970de3145d98" +STEP: verifying the updated pod is in kubernetes +Oct 27 14:16:29.786: INFO: Pod update OK +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:29.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9356" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":63,"skipped":1180,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:30.056: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-464 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-83675de4-ac84-48bf-adc6-4eacde208a73 +STEP: Creating a pod to test consume secrets +Oct 27 14:16:30.982: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2ebf011-c1f7-4713-8262-eee25d3d96c1" in namespace "projected-464" to be "Succeeded or Failed" +Oct 27 14:16:31.072: INFO: Pod "pod-projected-secrets-e2ebf011-c1f7-4713-8262-eee25d3d96c1": Phase="Pending", Reason="", readiness=false. Elapsed: 90.269115ms +Oct 27 14:16:33.164: INFO: Pod "pod-projected-secrets-e2ebf011-c1f7-4713-8262-eee25d3d96c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181928955s +STEP: Saw pod success +Oct 27 14:16:33.164: INFO: Pod "pod-projected-secrets-e2ebf011-c1f7-4713-8262-eee25d3d96c1" satisfied condition "Succeeded or Failed" +Oct 27 14:16:33.255: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-secrets-e2ebf011-c1f7-4713-8262-eee25d3d96c1 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:16:33.447: INFO: Waiting for pod pod-projected-secrets-e2ebf011-c1f7-4713-8262-eee25d3d96c1 to disappear +Oct 27 14:16:33.537: INFO: Pod pod-projected-secrets-e2ebf011-c1f7-4713-8262-eee25d3d96c1 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:33.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-464" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":64,"skipped":1232,"failed":0} +SSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:33.809: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4971 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-secret-jdtg +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:16:34.820: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jdtg" in namespace "subpath-4971" to be "Succeeded or Failed" +Oct 27 14:16:34.911: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 90.789325ms +Oct 27 14:16:37.002: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 2.181995995s +Oct 27 14:16:39.094: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 4.273913584s +Oct 27 14:16:41.187: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 6.366190351s +Oct 27 14:16:43.278: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 8.457898991s +Oct 27 14:16:45.370: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 10.5498424s +Oct 27 14:16:47.462: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 12.641364615s +Oct 27 14:16:49.553: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 14.732199218s +Oct 27 14:16:51.645: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 16.824339986s +Oct 27 14:16:53.736: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 18.915310343s +Oct 27 14:16:55.827: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Running", Reason="", readiness=true. Elapsed: 21.006795381s +Oct 27 14:16:57.918: INFO: Pod "pod-subpath-test-secret-jdtg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 23.097833806s +STEP: Saw pod success +Oct 27 14:16:57.918: INFO: Pod "pod-subpath-test-secret-jdtg" satisfied condition "Succeeded or Failed" +Oct 27 14:16:58.009: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-subpath-test-secret-jdtg container test-container-subpath-secret-jdtg: +STEP: delete the pod +Oct 27 14:16:58.209: INFO: Waiting for pod pod-subpath-test-secret-jdtg to disappear +Oct 27 14:16:58.300: INFO: Pod pod-subpath-test-secret-jdtg no longer exists +STEP: Deleting pod pod-subpath-test-secret-jdtg +Oct 27 14:16:58.300: INFO: Deleting pod "pod-subpath-test-secret-jdtg" in namespace "subpath-4971" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:58.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4971" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":65,"skipped":1235,"failed":0} +SSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:58.660: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-7344 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:59.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-7344" for this suite. 
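+Invalid sysctl names are rejected by the apiserver at admission time, before any kubelet is involved. A minimal sketch of a pod spec that should fail validation (sysctl names taken from the test; pod name illustrative):
+```bash
+# The create request is expected to be rejected with a validation error
+# naming "foo-" as an invalid value, so no pod is ever scheduled.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sysctl-demo                # illustrative name
+spec:
+  securityContext:
+    sysctls:
+    - name: kernel.shm_rmid_forced # valid name
+      value: "0"
+    - name: foo-                   # invalid name, as in the test
+      value: "bar"
+  containers:
+  - name: main
+    image: busybox
+    command: ["/bin/sh", "-c", "sleep 1"]
+EOF
+```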
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":66,"skipped":1240,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:59.670: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-842 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 14:17:00.499: INFO: Waiting up to 5m0s for pod "pod-579ed914-1fc5-4cd4-8fdc-d612604b56b4" in namespace "emptydir-842" to be "Succeeded or Failed" +Oct 27 14:17:00.590: INFO: Pod "pod-579ed914-1fc5-4cd4-8fdc-d612604b56b4": Phase="Pending", Reason="", readiness=false. Elapsed: 90.590935ms +Oct 27 14:17:02.681: INFO: Pod "pod-579ed914-1fc5-4cd4-8fdc-d612604b56b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182064499s +STEP: Saw pod success +Oct 27 14:17:02.681: INFO: Pod "pod-579ed914-1fc5-4cd4-8fdc-d612604b56b4" satisfied condition "Succeeded or Failed" +Oct 27 14:17:02.772: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-579ed914-1fc5-4cd4-8fdc-d612604b56b4 container test-container: +STEP: delete the pod +Oct 27 14:17:03.003: INFO: Waiting for pod pod-579ed914-1fc5-4cd4-8fdc-d612604b56b4 to disappear +Oct 27 14:17:03.093: INFO: Pod pod-579ed914-1fc5-4cd4-8fdc-d612604b56b4 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:03.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-842" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":67,"skipped":1241,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:03.363: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-5332 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption is created +Oct 27 14:17:04.365: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:17:06.457: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-5332" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":68,"skipped":1256,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:06.999: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4021 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 14:17:07.827: INFO: Waiting up to 5m0s for pod "pod-86e55bca-26ce-4ab9-8a1c-9cb82e6b776e" in namespace "emptydir-4021" to be "Succeeded or Failed" +Oct 27 14:17:07.917: INFO: Pod "pod-86e55bca-26ce-4ab9-8a1c-9cb82e6b776e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.263036ms +Oct 27 14:17:10.008: INFO: Pod "pod-86e55bca-26ce-4ab9-8a1c-9cb82e6b776e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181554685s +STEP: Saw pod success +Oct 27 14:17:10.008: INFO: Pod "pod-86e55bca-26ce-4ab9-8a1c-9cb82e6b776e" satisfied condition "Succeeded or Failed" +Oct 27 14:17:10.099: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-86e55bca-26ce-4ab9-8a1c-9cb82e6b776e container test-container: +STEP: delete the pod +Oct 27 14:17:10.502: INFO: Waiting for pod pod-86e55bca-26ce-4ab9-8a1c-9cb82e6b776e to disappear +Oct 27 14:17:10.592: INFO: Pod pod-86e55bca-26ce-4ab9-8a1c-9cb82e6b776e no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:10.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4021" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":69,"skipped":1257,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:10.863: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3517 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-3250d587-efac-4131-aecf-011a09c2165d +STEP: Creating a pod to test consume secrets +Oct 27 14:17:11.786: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-409b2449-4d3b-4e41-8f21-b1343c7e474a" in namespace "projected-3517" to be "Succeeded or Failed" +Oct 27 14:17:11.877: INFO: Pod "pod-projected-secrets-409b2449-4d3b-4e41-8f21-b1343c7e474a": Phase="Pending", Reason="", readiness=false. Elapsed: 90.725403ms +Oct 27 14:17:13.968: INFO: Pod "pod-projected-secrets-409b2449-4d3b-4e41-8f21-b1343c7e474a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.182006143s +STEP: Saw pod success +Oct 27 14:17:13.968: INFO: Pod "pod-projected-secrets-409b2449-4d3b-4e41-8f21-b1343c7e474a" satisfied condition "Succeeded or Failed" +Oct 27 14:17:14.059: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-secrets-409b2449-4d3b-4e41-8f21-b1343c7e474a container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:17:14.251: INFO: Waiting for pod pod-projected-secrets-409b2449-4d3b-4e41-8f21-b1343c7e474a to disappear +Oct 27 14:17:14.341: INFO: Pod pod-projected-secrets-409b2449-4d3b-4e41-8f21-b1343c7e474a no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:14.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3517" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":70,"skipped":1281,"failed":0} +SSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:14.611: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-8614 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 14:17:15.344: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 14:17:15.526: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 14:17:15.616: INFO: +Logging pods the apiserver thinks is on node ip-10-250-28-25.ec2.internal before test +Oct 27 14:17:15.798: INFO: addons-nginx-ingress-controller-b7784495c-9bd2v from kube-system started at 2021-10-27 13:56:28 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: apiserver-proxy-kb6fx from kube-system started at 2021-10-27 13:53:35 +0000 UTC (2 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: blackbox-exporter-65c549b94c-kw2mt from kube-system started at 2021-10-27 14:00:28 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: calico-node-pqn8p from kube-system started at 2021-10-27 13:55:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: csi-driver-node-ddm2w from kube-system started at 2021-10-27 13:53:35 +0000 UTC (3 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: kube-proxy-tnk6p from kube-system started at 2021-10-27 13:56:34 +0000 UTC (2 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: node-exporter-jhkvj from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: node-problem-detector-l6hpl from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.798: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:17:15.798: INFO: +Logging pods the apiserver thinks is on node ip-10-250-9-48.ec2.internal before test +Oct 27 14:17:15.980: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-bnwpb from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: apiserver-proxy-4k9m7 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (2 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: calico-kube-controllers-56bcbfb5c5-nhtm5 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: calico-node-pcdrk from kube-system started at 2021-10-27 13:55:32 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: calico-node-vertical-autoscaler-785b5f968-89m6j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 
container statuses recorded) +Oct 27 14:17:15.980: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: calico-typha-deploy-546b97d4b5-xrvqz from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-gbzpp from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: calico-typha-vertical-autoscaler-5c9655cddd-wwsqk from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: coredns-746d4d76f8-nqpnh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: coredns-746d4d76f8-zksdl from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: csi-driver-node-cwstr from kube-system started at 2021-10-27 13:53:22 +0000 UTC (3 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: kube-proxy-d8j27 from kube-system started at 2021-10-27 13:56:29 +0000 UTC (2 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: metrics-server-98f7b76bf-s6v4j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: node-exporter-27q2j from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: node-problem-detector-f6k47 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: vpn-shoot-77846799c6-lvhrh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: dashboard-metrics-scraper-7ccbfc448f-8vkgz from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 14:17:15.980: INFO: kubernetes-dashboard-5484586d8f-2hskr from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 14:17:15.980: INFO: Container kubernetes-dashboard ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: verifying the node has the label node ip-10-250-28-25.ec2.internal +STEP: verifying the node has the label node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod addons-nginx-ingress-controller-b7784495c-9bd2v requesting resource cpu=100m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-bnwpb requesting resource cpu=0m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod apiserver-proxy-4k9m7 requesting resource cpu=40m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod apiserver-proxy-kb6fx requesting resource cpu=40m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod blackbox-exporter-65c549b94c-kw2mt requesting resource cpu=11m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod calico-kube-controllers-56bcbfb5c5-nhtm5 requesting resource cpu=10m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod calico-node-pcdrk requesting resource cpu=250m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod calico-node-pqn8p requesting resource cpu=250m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod calico-node-vertical-autoscaler-785b5f968-89m6j requesting resource cpu=10m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod calico-typha-deploy-546b97d4b5-xrvqz requesting resource cpu=200m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod calico-typha-horizontal-autoscaler-5b58bb446c-gbzpp requesting resource cpu=10m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod calico-typha-vertical-autoscaler-5c9655cddd-wwsqk requesting resource cpu=10m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod coredns-746d4d76f8-nqpnh requesting resource cpu=50m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod coredns-746d4d76f8-zksdl requesting resource cpu=50m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod csi-driver-node-cwstr requesting resource cpu=40m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod csi-driver-node-ddm2w requesting resource cpu=40m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod kube-proxy-d8j27 requesting resource cpu=34m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod kube-proxy-tnk6p requesting resource cpu=34m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod metrics-server-98f7b76bf-s6v4j requesting resource cpu=50m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod node-exporter-27q2j requesting resource cpu=50m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod node-exporter-jhkvj requesting resource cpu=50m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod node-problem-detector-f6k47 requesting resource cpu=20m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod node-problem-detector-l6hpl requesting resource cpu=20m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.547: INFO: Pod vpn-shoot-77846799c6-lvhrh requesting resource cpu=100m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod dashboard-metrics-scraper-7ccbfc448f-8vkgz requesting resource cpu=0m on Node ip-10-250-9-48.ec2.internal +Oct 27 14:17:16.547: INFO: Pod kubernetes-dashboard-5484586d8f-2hskr requesting resource cpu=50m on 
Node ip-10-250-9-48.ec2.internal +STEP: Starting Pods to consume most of the cluster CPU. +Oct 27 14:17:16.547: INFO: Creating a pod which consumes cpu=962m on Node ip-10-250-28-25.ec2.internal +Oct 27 14:17:16.644: INFO: Creating a pod which consumes cpu=662m on Node ip-10-250-9-48.ec2.internal +STEP: Creating another pod that requires unavailable amount of CPU. +STEP: Considering event: +Type = [Normal], Name = [filler-pod-7c12e9ef-9e22-4863-bedd-4ee96e25366a.16b1e8eb3436d3dd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8614/filler-pod-7c12e9ef-9e22-4863-bedd-4ee96e25366a to ip-10-250-28-25.ec2.internal] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-7c12e9ef-9e22-4863-bedd-4ee96e25366a.16b1e8eb691a4d37], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-7c12e9ef-9e22-4863-bedd-4ee96e25366a.16b1e8eb6a2e84fd], Reason = [Created], Message = [Created container filler-pod-7c12e9ef-9e22-4863-bedd-4ee96e25366a] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-7c12e9ef-9e22-4863-bedd-4ee96e25366a.16b1e8eb6fff2082], Reason = [Started], Message = [Started container filler-pod-7c12e9ef-9e22-4863-bedd-4ee96e25366a] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-89d5f34d-c1ed-4036-973d-0ba4eadc72f2.16b1e8eb39cb15d4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8614/filler-pod-89d5f34d-c1ed-4036-973d-0ba4eadc72f2 to ip-10-250-9-48.ec2.internal] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-89d5f34d-c1ed-4036-973d-0ba4eadc72f2.16b1e8eb76ba9ab8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-89d5f34d-c1ed-4036-973d-0ba4eadc72f2.16b1e8eb785f3b09], Reason = [Created], Message = [Created container filler-pod-89d5f34d-c1ed-4036-973d-0ba4eadc72f2] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-89d5f34d-c1ed-4036-973d-0ba4eadc72f2.16b1e8eb7dc5201c], Reason = [Started], Message = [Started container filler-pod-89d5f34d-c1ed-4036-973d-0ba4eadc72f2] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.16b1e8ebd17be00a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] +STEP: removing the label node off the node ip-10-250-28-25.ec2.internal +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node ip-10-250-9-48.ec2.internal +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:21.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8614" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":71,"skipped":1284,"failed":0} +SSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:21.277: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-5938 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's command +Oct 27 14:17:22.105: INFO: Waiting up to 5m0s for pod "var-expansion-e12ca2e2-e0de-4740-ba25-c78982e572c0" in namespace "var-expansion-5938" to be "Succeeded or Failed" +Oct 27 14:17:22.195: INFO: Pod "var-expansion-e12ca2e2-e0de-4740-ba25-c78982e572c0": Phase="Pending", Reason="", readiness=false. Elapsed: 90.336678ms +Oct 27 14:17:24.287: INFO: Pod "var-expansion-e12ca2e2-e0de-4740-ba25-c78982e572c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181443297s +STEP: Saw pod success +Oct 27 14:17:24.287: INFO: Pod "var-expansion-e12ca2e2-e0de-4740-ba25-c78982e572c0" satisfied condition "Succeeded or Failed" +Oct 27 14:17:24.377: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod var-expansion-e12ca2e2-e0de-4740-ba25-c78982e572c0 container dapi-container: +STEP: delete the pod +Oct 27 14:17:24.572: INFO: Waiting for pod var-expansion-e12ca2e2-e0de-4740-ba25-c78982e572c0 to disappear +Oct 27 14:17:24.662: INFO: Pod var-expansion-e12ca2e2-e0de-4740-ba25-c78982e572c0 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:24.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5938" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":72,"skipped":1289,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:24.933: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4560 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:43.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4560" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":346,"completed":73,"skipped":1297,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:43.274: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-838 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-838/configmap-test-8cbaa594-7495-4946-ba73-fa5aa10e4618 +STEP: Creating a pod to test consume configMaps +Oct 27 14:17:44.193: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a1f2453-81e9-4e88-8b02-e4fc94ea7842" in namespace "configmap-838" to be "Succeeded or Failed" +Oct 27 14:17:44.283: INFO: Pod "pod-configmaps-6a1f2453-81e9-4e88-8b02-e4fc94ea7842": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.32963ms +Oct 27 14:17:46.374: INFO: Pod "pod-configmaps-6a1f2453-81e9-4e88-8b02-e4fc94ea7842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180908789s +STEP: Saw pod success +Oct 27 14:17:46.374: INFO: Pod "pod-configmaps-6a1f2453-81e9-4e88-8b02-e4fc94ea7842" satisfied condition "Succeeded or Failed" +Oct 27 14:17:46.464: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-6a1f2453-81e9-4e88-8b02-e4fc94ea7842 container env-test: +STEP: delete the pod +Oct 27 14:17:46.656: INFO: Waiting for pod pod-configmaps-6a1f2453-81e9-4e88-8b02-e4fc94ea7842 to disappear +Oct 27 14:17:46.746: INFO: Pod pod-configmaps-6a1f2453-81e9-4e88-8b02-e4fc94ea7842 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:46.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-838" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":74,"skipped":1315,"failed":0} +SSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:47.017: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2485 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Oct 27 14:17:48.130: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:50.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2485" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":75,"skipped":1318,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:51.218: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-4076 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:17:56.801: INFO: Pod name wrapped-volume-race-30f94761-232a-4ccf-b992-fd3869d7c67a: Found 1 pods out of 5 +Oct 27 14:18:02.171: INFO: Pod name wrapped-volume-race-30f94761-232a-4ccf-b992-fd3869d7c67a: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-30f94761-232a-4ccf-b992-fd3869d7c67a in namespace emptydir-wrapper-4076, will wait for the garbage collector to delete the pods +Oct 27 14:18:02.908: INFO: Deleting ReplicationController wrapped-volume-race-30f94761-232a-4ccf-b992-fd3869d7c67a took: 91.836855ms +Oct 27 14:18:03.009: INFO: Terminating ReplicationController wrapped-volume-race-30f94761-232a-4ccf-b992-fd3869d7c67a pods took: 100.440664ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:18:05.801: INFO: Pod name wrapped-volume-race-f67c419a-0a53-4375-9a3b-f2e551120f65: Found 1 pods out of 5 +Oct 27 14:18:11.162: INFO: Pod name wrapped-volume-race-f67c419a-0a53-4375-9a3b-f2e551120f65: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-f67c419a-0a53-4375-9a3b-f2e551120f65 in namespace emptydir-wrapper-4076, will wait for the garbage collector to delete the pods +Oct 27 14:18:11.901: INFO: Deleting ReplicationController wrapped-volume-race-f67c419a-0a53-4375-9a3b-f2e551120f65 took: 91.802411ms +Oct 27 14:18:12.001: INFO: Terminating ReplicationController wrapped-volume-race-f67c419a-0a53-4375-9a3b-f2e551120f65 pods took: 100.659275ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:18:14.601: INFO: Pod name wrapped-volume-race-07650a05-1a7a-4949-bf6c-7a0c4155c83c: Found 1 pods out of 5 +Oct 27 14:18:19.959: INFO: Pod name wrapped-volume-race-07650a05-1a7a-4949-bf6c-7a0c4155c83c: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-07650a05-1a7a-4949-bf6c-7a0c4155c83c in namespace emptydir-wrapper-4076, will wait for the garbage collector to delete the pods +Oct 27 14:18:20.904: INFO: Deleting ReplicationController wrapped-volume-race-07650a05-1a7a-4949-bf6c-7a0c4155c83c took: 197.085794ms +Oct 27 14:18:21.005: INFO: Terminating ReplicationController wrapped-volume-race-07650a05-1a7a-4949-bf6c-7a0c4155c83c pods 
took: 101.097552ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-4076" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":76,"skipped":1324,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:28.257: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4053 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Oct 27 14:18:29.623: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4053 b5b0d1be-676f-4468-9485-738eb74d5f9d 12033 0 2021-10-27 14:18:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:18:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:18:29.624: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4053 b5b0d1be-676f-4468-9485-738eb74d5f9d 12037 0 2021-10-27 14:18:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:18:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:29.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4053" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":77,"skipped":1362,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:29.806: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-9534 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:18:32.817: INFO: Deleting pod "var-expansion-63b0a43f-2ae0-4ada-9330-c44a30444a7c" in namespace "var-expansion-9534" +Oct 27 14:18:32.909: INFO: Wait up to 5m0s for pod "var-expansion-63b0a43f-2ae0-4ada-9330-c44a30444a7c" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:35.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9534" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":78,"skipped":1372,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:35.373: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-4532 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 14:18:36.201: INFO: Waiting up to 5m0s for pod "security-context-6de3a0d7-f503-4396-9564-f68a43c6548b" in namespace "security-context-4532" to be "Succeeded or Failed" +Oct 27 14:18:36.291: INFO: Pod "security-context-6de3a0d7-f503-4396-9564-f68a43c6548b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.40871ms +Oct 27 14:18:38.383: INFO: Pod "security-context-6de3a0d7-f503-4396-9564-f68a43c6548b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181563661s +STEP: Saw pod success +Oct 27 14:18:38.383: INFO: Pod "security-context-6de3a0d7-f503-4396-9564-f68a43c6548b" satisfied condition "Succeeded or Failed" +Oct 27 14:18:38.473: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod security-context-6de3a0d7-f503-4396-9564-f68a43c6548b container test-container: +STEP: delete the pod +Oct 27 14:18:38.703: INFO: Waiting for pod security-context-6de3a0d7-f503-4396-9564-f68a43c6548b to disappear +Oct 27 14:18:38.793: INFO: Pod security-context-6de3a0d7-f503-4396-9564-f68a43c6548b no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:38.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-4532" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":79,"skipped":1398,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:39.064: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1561 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-1561/configmap-test-b7f6c574-9c2a-46d6-a1fe-9cc854522bcb +STEP: Creating a pod to test consume configMaps +Oct 27 14:18:39.983: INFO: Waiting up to 5m0s for pod "pod-configmaps-de86691f-f86c-4bd2-9fe6-7ce896cb3c31" in namespace "configmap-1561" to be "Succeeded or Failed" +Oct 27 14:18:40.074: INFO: Pod "pod-configmaps-de86691f-f86c-4bd2-9fe6-7ce896cb3c31": Phase="Pending", Reason="", readiness=false. Elapsed: 90.324741ms +Oct 27 14:18:42.165: INFO: Pod "pod-configmaps-de86691f-f86c-4bd2-9fe6-7ce896cb3c31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181708156s +STEP: Saw pod success +Oct 27 14:18:42.165: INFO: Pod "pod-configmaps-de86691f-f86c-4bd2-9fe6-7ce896cb3c31" satisfied condition "Succeeded or Failed" +Oct 27 14:18:42.256: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-de86691f-f86c-4bd2-9fe6-7ce896cb3c31 container env-test: +STEP: delete the pod +Oct 27 14:18:42.449: INFO: Waiting for pod pod-configmaps-de86691f-f86c-4bd2-9fe6-7ce896cb3c31 to disappear +Oct 27 14:18:42.539: INFO: Pod pod-configmaps-de86691f-f86c-4bd2-9fe6-7ce896cb3c31 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:42.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1561" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":80,"skipped":1453,"failed":0} +SSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:42.810: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-70 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Oct 27 14:18:44.268: INFO: Number of nodes with available pods: 0 +Oct 27 14:18:44.268: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:18:45.538: INFO: Number of nodes with available pods: 2 +Oct 27 14:18:45.538: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Getting /status +Oct 27 14:18:45.718: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Oct 27 14:18:45.900: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Oct 27 14:18:45.990: INFO: Observed &DaemonSet event: ADDED +Oct 27 14:18:45.990: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:18:46.115: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:18:46.115: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:18:46.115: INFO: Found daemon set daemon-set in namespace daemonsets-70 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:18:46.115: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Oct 27 14:18:46.297: INFO: Observed &DaemonSet event: ADDED +Oct 27 14:18:46.297: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:18:46.297: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:18:46.297: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:18:46.297: INFO: Observed daemon set daemon-set in namespace daemonsets-70 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:18:46.298: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:18:46.298: INFO: Found daemon set daemon-set in namespace daemonsets-70 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Oct 27 14:18:46.298: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-70, will wait for the garbage collector to delete the pods +Oct 27 14:18:46.673: INFO: Deleting DaemonSet.extensions daemon-set took: 91.269367ms +Oct 27 14:18:46.774: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.757221ms +Oct 27 14:18:48.565: INFO: Number of nodes with available pods: 0 +Oct 27 14:18:48.565: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:18:48.655: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"12241"},"items":null} + +Oct 27 14:18:48.745: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"12241"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:49.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"daemonsets-70" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":81,"skipped":1458,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:49.200: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9322 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:18:49.931: INFO: Creating simple deployment test-new-deployment +Oct 27 14:18:50.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941129, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941129, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941129, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941129, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:18:53.020: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-9322 806d1906-ed3d-485a-9457-7fafdb4455dd 12302 3 2021-10-27 14:18:49 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2021-10-27 14:18:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:18:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003951e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:4,UpdatedReplicas:4,AvailableReplicas:1,UnavailableReplicas:3,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-27 14:18:51 +0000 UTC,LastTransitionTime:2021-10-27 14:18:49 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 14:18:52 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:18:53.111: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-9322 bbae3bd2-fc05-4ef9-aaf4-21117da62118 12301 3 2021-10-27 14:18:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-new-deployment 806d1906-ed3d-485a-9457-7fafdb4455dd 0xc00266a377 0xc00266a378}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:18:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"806d1906-ed3d-485a-9457-7fafdb4455dd\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:18:51 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00266a418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:4,FullyLabeledReplicas:4,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:18:53.203: INFO: Pod "test-new-deployment-847dcfb7fb-9z526" is available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-9z526 test-new-deployment-847dcfb7fb- deployment-9322 ed565729-9eb7-4b53-8fc9-0207ab6c64fa 12267 0 2021-10-27 14:18:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:7300d546ba3ba5ad4e89c3e579a30aa894b62602e5e9bb86b7209f79676a3eeb cni.projectcalico.org/podIP:100.96.1.109/32 cni.projectcalico.org/podIPs:100.96.1.109/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb bbae3bd2-fc05-4ef9-aaf4-21117da62118 0xc00266aa37 0xc00266aa38}] [] [{kube-controller-manager Update v1 2021-10-27 14:18:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbae3bd2-fc05-4ef9-aaf4-21117da62118\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:18:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:18:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.109\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6jslt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jslt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.109,StartTime:2021-10-27 14:18:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:18:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://6ac41dc7a22fe4fa981abf48f90298274d796f6f77fcafd917d4befbe114b936,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:18:53.203: INFO: Pod "test-new-deployment-847dcfb7fb-hq8cb" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-hq8cb test-new-deployment-847dcfb7fb- deployment-9322 9354191f-b833-42d7-bb20-cb1c8319813e 12304 0 2021-10-27 14:18:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb bbae3bd2-fc05-4ef9-aaf4-21117da62118 0xc00266aee7 0xc00266aee8}] [] [{kube-controller-manager Update v1 2021-10-27 14:18:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbae3bd2-fc05-4ef9-aaf4-21117da62118\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:18:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f9mlw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9mlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 14:18:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:18:53.203: INFO: Pod "test-new-deployment-847dcfb7fb-lcbl8" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-lcbl8 test-new-deployment-847dcfb7fb- deployment-9322 932b150d-3fc5-4d4f-8ae8-cee318641cf7 12303 0 2021-10-27 14:18:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb bbae3bd2-fc05-4ef9-aaf4-21117da62118 0xc00266b347 0xc00266b348}] [] [{kube-controller-manager Update v1 2021-10-27 14:18:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbae3bd2-fc05-4ef9-aaf4-21117da62118\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:18:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rbwvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rbwvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 14:18:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:18:53.203: INFO: Pod "test-new-deployment-847dcfb7fb-shhvq" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-shhvq test-new-deployment-847dcfb7fb- deployment-9322 16c8272a-e234-4608-823b-5845391e5cba 12291 0 2021-10-27 14:18:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb bbae3bd2-fc05-4ef9-aaf4-21117da62118 0xc00266b507 0xc00266b508}] [] [{kube-controller-manager Update v1 2021-10-27 14:18:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbae3bd2-fc05-4ef9-aaf4-21117da62118\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:18:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wzchk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzchk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:18:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 14:18:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:53.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9322" for this suite. 
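+(Reference note, not part of the suite's output: the Deployment test above drives the /scale subresource. A minimal client-go sketch of that round-trip, with namespace and Deployment name taken from the log; the kubeconfig path and error handling are illustrative assumptions.)
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	// Kubeconfig path mirrors the one the log reports (an assumption for this sketch).
+	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
+	if err != nil {
+		panic(err)
+	}
+	cs := kubernetes.NewForConfigOrDie(cfg)
+
+	deployments := cs.AppsV1().Deployments("deployment-9322")
+	// Read the scale subresource, bump replicas, write it back.
+	scale, err := deployments.GetScale(context.TODO(), "test-new-deployment", metav1.GetOptions{})
+	if err != nil {
+		panic(err)
+	}
+	scale.Spec.Replicas++
+	if _, err := deployments.UpdateScale(context.TODO(), "test-new-deployment", scale, metav1.UpdateOptions{}); err != nil {
+		panic(err)
+	}
+	fmt.Println("replicas now:", scale.Spec.Replicas)
+}
+```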
+•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":82,"skipped":1491,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:53.386: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6184 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name secret-emptykey-test-52eb001d-a572-4b2a-a6c8-1823a1d81be1 +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:54.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6184" for this suite. +•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":83,"skipped":1507,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:54.391: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4782 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-afafe29a-d60a-480e-a762-911640b84459 +STEP: Creating a pod to test consume configMaps +Oct 27 14:18:55.310: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0bc291a2-3b53-43f6-88d1-93611fb9217c" in namespace "projected-4782" to be "Succeeded or Failed" +Oct 27 14:18:55.400: INFO: Pod "pod-projected-configmaps-0bc291a2-3b53-43f6-88d1-93611fb9217c": Phase="Pending", Reason="", readiness=false. Elapsed: 90.513187ms +Oct 27 14:18:57.492: INFO: Pod "pod-projected-configmaps-0bc291a2-3b53-43f6-88d1-93611fb9217c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181857046s +STEP: Saw pod success +Oct 27 14:18:57.492: INFO: Pod "pod-projected-configmaps-0bc291a2-3b53-43f6-88d1-93611fb9217c" satisfied condition "Succeeded or Failed" +Oct 27 14:18:57.582: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-configmaps-0bc291a2-3b53-43f6-88d1-93611fb9217c container agnhost-container: +STEP: delete the pod +Oct 27 14:18:57.810: INFO: Waiting for pod pod-projected-configmaps-0bc291a2-3b53-43f6-88d1-93611fb9217c to disappear +Oct 27 14:18:57.901: INFO: Pod pod-projected-configmaps-0bc291a2-3b53-43f6-88d1-93611fb9217c no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:57.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4782" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":84,"skipped":1510,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:58.172: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2976 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:18:58.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e7c97cf-f175-48ae-937d-b5005a703237" in namespace "projected-2976" to be "Succeeded or Failed" +Oct 27 14:18:59.090: INFO: Pod "downwardapi-volume-6e7c97cf-f175-48ae-937d-b5005a703237": Phase="Pending", Reason="", readiness=false. Elapsed: 90.43582ms +Oct 27 14:19:01.180: INFO: Pod "downwardapi-volume-6e7c97cf-f175-48ae-937d-b5005a703237": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180831757s +STEP: Saw pod success +Oct 27 14:19:01.180: INFO: Pod "downwardapi-volume-6e7c97cf-f175-48ae-937d-b5005a703237" satisfied condition "Succeeded or Failed" +Oct 27 14:19:01.270: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-6e7c97cf-f175-48ae-937d-b5005a703237 container client-container: +STEP: delete the pod +Oct 27 14:19:01.503: INFO: Waiting for pod downwardapi-volume-6e7c97cf-f175-48ae-937d-b5005a703237 to disappear +Oct 27 14:19:01.593: INFO: Pod downwardapi-volume-6e7c97cf-f175-48ae-937d-b5005a703237 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:01.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2976" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":85,"skipped":1522,"failed":0} + +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:01.864: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-8526 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. 
+Oct 27 14:19:02.868: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Oct 27 14:19:05.141: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Oct 27 14:19:05.323: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Oct 27 14:19:05.413: INFO: Observed &ReplicaSet event: ADDED +Oct 27 14:19:05.414: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:19:05.414: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:19:05.414: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:19:05.414: INFO: Found replicaset test-rs in namespace replicaset-8526 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:19:05.415: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Oct 27 14:19:05.415: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:19:05.614: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Oct 27 14:19:05.704: INFO: Observed &ReplicaSet event: ADDED +Oct 27 14:19:05.704: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:19:05.704: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:19:05.704: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:19:05.704: INFO: Observed replicaset test-rs in namespace replicaset-8526 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:19:05.704: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:19:05.704: INFO: Found replicaset test-rs in namespace replicaset-8526 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Oct 27 14:19:05.704: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:05.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-8526" for this suite. 
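+(Reference note, not part of the suite's output: the ReplicaSet test above patches the "status" subresource with the exact payload it logs. A minimal client-go sketch of that step; kubeconfig path and error handling are illustrative assumptions.)
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/types"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
+	if err != nil {
+		panic(err)
+	}
+	cs := kubernetes.NewForConfigOrDie(cfg)
+
+	// Same merge-patch payload the test logs, applied to the "status" subresource.
+	payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
+	rs, err := cs.AppsV1().ReplicaSets("replicaset-8526").Patch(
+		context.TODO(), "test-rs", types.MergePatchType, payload,
+		metav1.PatchOptions{}, "status")
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println("patched conditions:", rs.Status.Conditions)
+}
+```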
+•{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":86,"skipped":1522,"failed":0} +SSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:05.974: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6109 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:19:06.803: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-63d599f5-8c4b-4a62-9922-64c58486f79e" in namespace "security-context-test-6109" to be "Succeeded or Failed" +Oct 27 14:19:06.893: INFO: Pod "busybox-privileged-false-63d599f5-8c4b-4a62-9922-64c58486f79e": Phase="Pending", Reason="", readiness=false. Elapsed: 90.011034ms +Oct 27 14:19:08.985: INFO: Pod "busybox-privileged-false-63d599f5-8c4b-4a62-9922-64c58486f79e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181836823s +Oct 27 14:19:08.985: INFO: Pod "busybox-privileged-false-63d599f5-8c4b-4a62-9922-64c58486f79e" satisfied condition "Succeeded or Failed" +Oct 27 14:19:09.125: INFO: Got logs for pod "busybox-privileged-false-63d599f5-8c4b-4a62-9922-64c58486f79e": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:09.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-6109" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":87,"skipped":1528,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:09.396: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9633 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 27 14:19:10.129: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:42.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9633" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":88,"skipped":1536,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:42.541: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-954 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-954 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:19:43.549: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 14:19:53.641: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Oct 27 14:19:54.097: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:19:54.097: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 14:20:04.189: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:20:04.189: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:20:04.785: INFO: Deleting all statefulset in ns statefulset-954 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:05.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-954" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":89,"skipped":1551,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:05.239: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-6418 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-s54z +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:20:06.249: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s54z" in namespace "subpath-6418" to be "Succeeded or Failed" +Oct 27 14:20:06.340: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Pending", Reason="", readiness=false. Elapsed: 90.448361ms +Oct 27 14:20:08.431: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 2.181697664s +Oct 27 14:20:10.522: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 4.272856204s +Oct 27 14:20:12.612: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 6.363049479s +Oct 27 14:20:14.704: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 8.454291432s +Oct 27 14:20:16.795: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 10.54567733s +Oct 27 14:20:18.887: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 12.637292045s +Oct 27 14:20:20.978: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 14.728966959s +Oct 27 14:20:23.069: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 16.819714379s +Oct 27 14:20:25.160: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 18.911029417s +Oct 27 14:20:27.252: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Running", Reason="", readiness=true. Elapsed: 21.002498026s +Oct 27 14:20:29.343: INFO: Pod "pod-subpath-test-configmap-s54z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 23.093703948s +STEP: Saw pod success +Oct 27 14:20:29.343: INFO: Pod "pod-subpath-test-configmap-s54z" satisfied condition "Succeeded or Failed" +Oct 27 14:20:29.433: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-subpath-test-configmap-s54z container test-container-subpath-configmap-s54z: +STEP: delete the pod +Oct 27 14:20:29.625: INFO: Waiting for pod pod-subpath-test-configmap-s54z to disappear +Oct 27 14:20:29.715: INFO: Pod pod-subpath-test-configmap-s54z no longer exists +STEP: Deleting pod pod-subpath-test-configmap-s54z +Oct 27 14:20:29.716: INFO: Deleting pod "pod-subpath-test-configmap-s54z" in namespace "subpath-6418" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:29.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6418" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":90,"skipped":1561,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:30.077: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-5384 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:31.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-5384" for this suite. 
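+(Reference note, not part of the suite's output: the Events API test above fetches, patches, deletes, and lists events via the events.k8s.io group. A minimal client-go sketch of the list/patch/delete calls; the event name "test-event", the patch body, and the kubeconfig path are illustrative assumptions.)
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/types"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
+	if err != nil {
+		panic(err)
+	}
+	cs := kubernetes.NewForConfigOrDie(cfg)
+
+	events := cs.EventsV1().Events("events-5384")
+
+	// List events in the test namespace, as the test does.
+	list, err := events.List(context.TODO(), metav1.ListOptions{})
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println("events:", len(list.Items))
+
+	// Patch and then delete a named event ("test-event" is a placeholder).
+	patch := []byte(`{"metadata":{"labels":{"e2e":"patched"}}}`)
+	if _, err := events.Patch(context.TODO(), "test-event", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
+		fmt.Println("patch:", err)
+	}
+	if err := events.Delete(context.TODO(), "test-event", metav1.DeleteOptions{}); err != nil {
+		fmt.Println("delete:", err)
+	}
+}
+```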
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":91,"skipped":1571,"failed":0} +S +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:32.179: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-1798 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:41.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-1798" for this suite. +•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":92,"skipped":1572,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:41.362: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-34 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Oct 27 14:20:42.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:20:46.955: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:06.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-34" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":93,"skipped":1574,"failed":0} +SSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:06.285: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8516 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:11.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8516" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":94,"skipped":1577,"failed":0} +SSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:12.086: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7077 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:21:12.819: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:16.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-7077" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":95,"skipped":1580,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:16.580: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2773 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 14:21:17.407: INFO: Waiting up to 5m0s for pod "pod-42638f2f-91fb-470e-8af2-1ec87d16e747" in namespace "emptydir-2773" to be "Succeeded or Failed" +Oct 27 14:21:17.497: INFO: Pod "pod-42638f2f-91fb-470e-8af2-1ec87d16e747": Phase="Pending", Reason="", readiness=false. Elapsed: 90.026425ms +Oct 27 14:21:19.601: INFO: Pod "pod-42638f2f-91fb-470e-8af2-1ec87d16e747": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.193767211s +STEP: Saw pod success +Oct 27 14:21:19.601: INFO: Pod "pod-42638f2f-91fb-470e-8af2-1ec87d16e747" satisfied condition "Succeeded or Failed" +Oct 27 14:21:19.691: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-42638f2f-91fb-470e-8af2-1ec87d16e747 container test-container: +STEP: delete the pod +Oct 27 14:21:19.922: INFO: Waiting for pod pod-42638f2f-91fb-470e-8af2-1ec87d16e747 to disappear +Oct 27 14:21:20.012: INFO: Pod pod-42638f2f-91fb-470e-8af2-1ec87d16e747 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:20.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2773" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":96,"skipped":1605,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:20.283: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-8109 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:21:21.383: INFO: The status of Pod pod-secrets-6387e4fc-fbff-4d48-b97d-683d76dcfb3e is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:21:23.474: INFO: The status of Pod pod-secrets-6387e4fc-fbff-4d48-b97d-683d76dcfb3e is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:23.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-8109" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":97,"skipped":1625,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:24.111: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9158 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:29.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9158" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":98,"skipped":1632,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:29.392: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9638 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-046002aa-4ea5-45cf-8686-9a11c1697680 +STEP: Creating a pod to test consume configMaps +Oct 27 14:21:30.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf31b4cf-327c-4775-8e3d-c84fca63aa13" in namespace "configmap-9638" to be "Succeeded or Failed" +Oct 27 14:21:30.401: INFO: Pod "pod-configmaps-cf31b4cf-327c-4775-8e3d-c84fca63aa13": Phase="Pending", Reason="", readiness=false. Elapsed: 90.125171ms +Oct 27 14:21:32.492: INFO: Pod "pod-configmaps-cf31b4cf-327c-4775-8e3d-c84fca63aa13": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181114135s +STEP: Saw pod success +Oct 27 14:21:32.492: INFO: Pod "pod-configmaps-cf31b4cf-327c-4775-8e3d-c84fca63aa13" satisfied condition "Succeeded or Failed" +Oct 27 14:21:32.582: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-cf31b4cf-327c-4775-8e3d-c84fca63aa13 container agnhost-container: +STEP: delete the pod +Oct 27 14:21:32.775: INFO: Waiting for pod pod-configmaps-cf31b4cf-327c-4775-8e3d-c84fca63aa13 to disappear +Oct 27 14:21:32.865: INFO: Pod pod-configmaps-cf31b4cf-327c-4775-8e3d-c84fca63aa13 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:32.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9638" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":99,"skipped":1643,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:33.136: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-4782 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4782.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4782.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4782.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4782.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4782.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4782.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:21:39.130: INFO: DNS probes using dns-4782/dns-test-6fb01c26-081a-43d6-a1b0-e25a614a3e7e succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:39.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4782" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":100,"skipped":1652,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:39.498: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3145 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3145.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3145.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3145.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:21:44.818: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:44.953: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:45.047: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:45.140: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:45.420: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:45.516: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:45.609: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod 
dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:45.702: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:45.888: INFO: Lookups using dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local] + +Oct 27 14:21:50.984: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:51.076: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:51.169: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:51.262: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:51.540: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:51.633: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:51.725: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:51.818: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:52.004: INFO: Lookups using dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local] + +Oct 27 14:21:55.982: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:56.075: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:56.167: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:56.260: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:56.538: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:56.631: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:56.724: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:56.817: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:21:57.003: INFO: Lookups using dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local] + +Oct 27 14:22:00.984: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:01.077: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:01.170: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:01.263: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:01.542: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:01.635: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:01.728: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:01.822: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:02.008: INFO: Lookups using dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local] + +Oct 27 14:22:05.982: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:06.075: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:06.167: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:06.260: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find 
the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:06.539: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:06.632: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:06.725: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:06.818: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:07.004: INFO: Lookups using dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local] + +Oct 27 14:22:10.983: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:11.076: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:11.169: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:11.261: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:11.540: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:11.633: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:11.726: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:11.820: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local from pod dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701: the server could not find the requested resource (get pods dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701) +Oct 27 14:22:12.006: INFO: Lookups using dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3145.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3145.svc.cluster.local jessie_udp@dns-test-service-2.dns-3145.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3145.svc.cluster.local] + +Oct 27 14:22:17.052: INFO: DNS probes using dns-3145/dns-test-6c2f4c58-ef3f-4bff-902a-3b50f710e701 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:17.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3145" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":101,"skipped":1661,"failed":0} +S +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:17.421: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-3183 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pod templates +Oct 27 14:22:18.244: INFO: created test-podtemplate-1 +Oct 27 14:22:18.335: INFO: created test-podtemplate-2 +Oct 27 14:22:18.425: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Oct 27 14:22:18.516: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Oct 27 14:22:18.612: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:18.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-3183" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":102,"skipped":1662,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:18.884: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3808 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:22:19.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3808" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":103,"skipped":1669,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:22:20.069: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-2841 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Oct 27 14:23:01.543: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:23:01.543421 5725 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Oct 27 14:23:01.543: INFO: Deleting pod "simpletest.rc-6mw27" in namespace "gc-2841" +Oct 27 14:23:01.637: INFO: Deleting pod "simpletest.rc-7ptpf" in namespace "gc-2841" +Oct 27 14:23:01.732: INFO: Deleting pod "simpletest.rc-7qsw6" in namespace "gc-2841" +Oct 27 14:23:01.825: INFO: Deleting pod "simpletest.rc-8vmgs" in namespace "gc-2841" +Oct 27 14:23:01.919: INFO: Deleting pod "simpletest.rc-c4hz6" in namespace "gc-2841" +Oct 27 14:23:02.015: INFO: Deleting pod "simpletest.rc-nztkn" in namespace "gc-2841" +Oct 27 14:23:02.108: INFO: Deleting pod "simpletest.rc-qt284" in namespace "gc-2841" +Oct 27 14:23:02.203: INFO: Deleting pod "simpletest.rc-tq7dn" in namespace "gc-2841" +Oct 27 14:23:02.297: INFO: Deleting pod "simpletest.rc-w4xkp" in namespace "gc-2841" +Oct 27 14:23:02.390: INFO: Deleting pod "simpletest.rc-wxsjj" in namespace "gc-2841" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:02.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-2841" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":104,"skipped":1676,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:02.667: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-4616 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:23:05.865: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:06.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-4616" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":105,"skipped":1717,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:06.319: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3892 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3892.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.68.66.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.66.68.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.68.66.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.66.68.170_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3892.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.68.66.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.66.68.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.68.66.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.66.68.170_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:23:09.753: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:09.847: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:09.941: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:10.077: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:10.729: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:10.825: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:10.920: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:11.013: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:11.568: INFO: Lookups using dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local] + +Oct 27 14:23:16.662: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:16.754: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods 
dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:16.847: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:16.940: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:17.696: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:17.789: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:17.881: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:17.974: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:18.531: INFO: Lookups using dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local] + +Oct 27 14:23:21.665: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:21.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:21.850: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:21.943: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:22.603: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod 
dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:22.696: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:22.804: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:22.903: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:23.510: INFO: Lookups using dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local] + +Oct 27 14:23:26.664: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:26.756: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:26.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:26.942: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:27.595: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:27.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:27.780: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:27.873: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:28.430: INFO: Lookups using dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local] + +Oct 27 14:23:31.665: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:31.757: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:31.851: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:31.944: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:32.595: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:32.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:32.780: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:32.874: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:33.439: INFO: Lookups using dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local] + +Oct 27 14:23:36.663: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:36.756: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:36.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:36.942: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:37.597: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:37.690: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:37.783: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:37.876: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146: the server could not find the requested resource (get pods dns-test-a4f0dafc-9380-450e-b139-5381b7d27146) +Oct 27 14:23:38.433: INFO: Lookups using dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local] + +Oct 27 14:23:43.497: INFO: DNS probes using dns-3892/dns-test-a4f0dafc-9380-450e-b139-5381b7d27146 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:43.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3892" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":106,"skipped":1727,"failed":0} +SSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:43.963: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-570 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:23:44.967: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:24:45.786: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:45.876: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-4043 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Oct 27 14:24:49.076: INFO: found a healthy node: ip-10-250-28-25.ec2.internal +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:25:00.441: INFO: pods created so far: [1 1 1] +Oct 27 14:25:00.442: INFO: length of pods created so far: 3 +Oct 27 14:25:02.627: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:09.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-4043" for this suite. +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:10.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-570" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":107,"skipped":1730,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:11.005: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4264 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Oct 27 14:25:11.738: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Oct 27 14:25:33.284: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:25:38.014: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:57.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4264" for this suite. 
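What "CRs in the same group but different versions" exercises can be sketched with a single multi-version CRD; the group, kind, and names below are hypothetical:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names: {plural: foos, singular: foo, kind: Foo}
      versions:
      - name: v1
        served: true
        storage: true
        schema: {openAPIV3Schema: {type: object}}
      - name: v2
        served: true
        storage: false
        schema: {openAPIV3Schema: {type: object}}
    EOF
    # both served versions should show up in the published OpenAPI document
    kubectl explain foos --api-version=example.com/v2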
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":108,"skipped":1733,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:57.860: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8288 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:25:58.592: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8288 create -f -' +Oct 27 14:26:00.006: INFO: stderr: "" +Oct 27 14:26:00.006: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Oct 27 14:26:00.007: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8288 create -f -' +Oct 27 14:26:00.572: INFO: stderr: "" +Oct 27 14:26:00.572: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 14:26:01.663: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:26:01.663: INFO: Found 1 / 1 +Oct 27 14:26:01.663: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 27 14:26:01.753: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:26:01.753: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Oct 27 14:26:01.753: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8288 describe pod agnhost-primary-kzjbr' +Oct 27 14:26:02.268: INFO: stderr: "" +Oct 27 14:26:02.268: INFO: stdout: "Name: agnhost-primary-kzjbr\nNamespace: kubectl-8288\nPriority: 0\nNode: ip-10-250-28-25.ec2.internal/10.250.28.25\nStart Time: Wed, 27 Oct 2021 14:25:59 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: de83afda0381ad18f1609ac45b82065d602cf155753d7f9940fbf60a8f356218\n cni.projectcalico.org/podIP: 100.96.1.145/32\n cni.projectcalico.org/podIPs: 100.96.1.145/32\n kubernetes.io/psp: e2e-test-privileged-psp\nStatus: Running\nIP: 100.96.1.145\nIPs:\n IP: 100.96.1.145\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://24366f2da7a158302a4198cd937917fc858342d18746fb3d9da26102e4d711ca\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 27 Oct 2021 14:26:01 +0000\n Ready: True\n Restart Count: 0\n Environment:\n KUBERNETES_SERVICE_HOST: api.tm94z-0j6.it.internal.staging.k8s.ondemand.com\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c5c6n (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-c5c6n:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-8288/agnhost-primary-kzjbr to ip-10-250-28-25.ec2.internal\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Oct 27 14:26:02.268: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8288 describe rc agnhost-primary' +Oct 27 14:26:02.872: INFO: stderr: "" +Oct 27 14:26:02.872: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8288\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-kzjbr\n" +Oct 27 14:26:02.873: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8288 describe service agnhost-primary' +Oct 27 14:26:03.469: INFO: stderr: "" +Oct 27 14:26:03.469: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8288\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.67.240.164\nIPs: 100.67.240.164\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.1.145:6379\nSession Affinity: None\nEvents: \n" +Oct 27 14:26:03.648: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8288 describe node ip-10-250-28-25.ec2.internal' +Oct 27 14:26:04.541: INFO: stderr: "" +Oct 27 14:26:04.541: INFO: stdout: "Name: ip-10-250-28-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-28-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=m5.large\n node.kubernetes.io/role=node\n topology.ebs.csi.aws.com/zone=us-east-1c\n topology.kubernetes.io/region=us-east-1\n topology.kubernetes.io/zone=us-east-1c\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/cri-name=docker\n worker.gardener.cloud/pool=worker-1\n worker.gardener.cloud/system-components=true\nAnnotations: checksum/cloud-config-data: cabf229cd0533424137ae6f3cb1effccf9cdf42962d37e6acc8c3f43c8edc24a\n csi.volume.kubernetes.io/nodeid: {\"ebs.csi.aws.com\":\"i-0e859d931d3c7dd79\"}\n node.alpha.kubernetes.io/ttl: 0\n node.machine.sapcloud.io/last-applied-anno-labels-taints:\n {\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"node.kubernetes.io/role\":\"node\",\"topology.ebs.csi.aws.com/zone\":\"us-east-1c\",\"worker.gard...\n projectcalico.org/IPv4Address: 10.250.28.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.96.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 27 Oct 2021 13:53:34 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: ip-10-250-28-25.ec2.internal\n AcquireTime: \n RenewTime: Wed, 27 Oct 2021 14:26:03 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentContainerdRestart Unknown Wed, 27 Oct 2021 14:25:33 +0000 Wed, 27 Oct 2021 14:20:32 +0000 NoFrequentContainerdRestart error watching journald: failed to stat the log path \"/var/log/journal\": stat /v\n KernelDeadlock False Wed, 27 Oct 2021 14:25:33 +0000 Wed, 27 Oct 2021 14:20:31 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Wed, 27 Oct 2021 14:25:33 +0000 Wed, 27 Oct 2021 14:20:31 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice Unknown Wed, 27 Oct 2021 14:25:33 +0000 Wed, 27 Oct 2021 14:20:31 +0000 NoFrequentUnregisterNetDevice error watching journald: failed to stat the log path \"/var/log/journal\": stat /v\n FrequentKubeletRestart Unknown Wed, 27 Oct 2021 14:25:33 +0000 Wed, 27 Oct 2021 14:20:31 +0000 NoFrequentKubeletRestart error watching journald: failed to stat the log path \"/var/log/journal\": stat /v\n 
FrequentDockerRestart Unknown Wed, 27 Oct 2021 14:25:33 +0000 Wed, 27 Oct 2021 14:20:32 +0000 NoFrequentDockerRestart error watching journald: failed to stat the log path \"/var/log/journal\": stat /v\n NetworkUnavailable False Wed, 27 Oct 2021 13:55:46 +0000 Wed, 27 Oct 2021 13:55:46 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Wed, 27 Oct 2021 14:25:54 +0000 Wed, 27 Oct 2021 13:53:34 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 27 Oct 2021 14:25:54 +0000 Wed, 27 Oct 2021 13:53:34 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 27 Oct 2021 14:25:54 +0000 Wed, 27 Oct 2021 13:53:34 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 27 Oct 2021 14:25:54 +0000 Wed, 27 Oct 2021 13:53:55 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.250.28.25\n InternalDNS: ip-10-250-28-25.ec2.internal\n Hostname: ip-10-250-28-25.ec2.internal\nCapacity:\n cpu: 2\n ephemeral-storage: 31423468Ki\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7840276Ki\n pods: 110\nAllocatable:\n cpu: 1920m\n ephemeral-storage: 30568749647\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6689300Ki\n pods: 110\nSystem Info:\n Machine ID: ec2bcbcb6b95451d692e6cafa199fd88\n System UUID: ec2bcbcb-6b95-451d-692e-6cafa199fd88\n Boot ID: 4087c556-1a7e-4adb-b06a-1a78bff39335\n Kernel Version: 5.3.18-24.78-default\n OS Image: SUSE Linux Enterprise Server 15 SP2\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.6-ce\n Kubelet Version: v1.22.2\n Kube-Proxy Version: v1.22.2\nPodCIDR: 100.96.1.0/24\nPodCIDRs: 100.96.1.0/24\nProviderID: aws:///us-east-1c/i-0e859d931d3c7dd79\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system addons-nginx-ingress-controller-b7784495c-9bd2v 100m (5%) 400m (20%) 128Mi (1%) 512Mi (7%) 29m\n kube-system apiserver-proxy-kb6fx 40m (2%) 400m (20%) 40Mi (0%) 500Mi (7%) 32m\n kube-system blackbox-exporter-65c549b94c-kw2mt 11m (0%) 44m (2%) 23574998 (0%) 94299992 (1%) 25m\n kube-system calico-node-pqn8p 250m (13%) 800m (41%) 100Mi (1%) 700Mi (10%) 30m\n kube-system csi-driver-node-ddm2w 40m (2%) 110m (5%) 114Mi (1%) 180Mi (2%) 32m\n kube-system kube-proxy-tnk6p 34m (1%) 92m (4%) 47753748 (0%) 145014992 (2%) 29m\n kube-system node-exporter-jhkvj 50m (2%) 150m (7%) 50Mi (0%) 150Mi (2%) 32m\n kube-system node-problem-detector-lscmn 11m (0%) 44m (2%) 23574998 (0%) 94299992 (1%) 5m35s\n kubectl-8288 agnhost-primary-kzjbr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 536m (27%) 2040m (106%)\n memory 547888576 (7%) 2474807168 (36%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n example.com/fakecpu 0 0\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 32m kubelet Starting kubelet.\n Normal NodeHasSufficientMemory 32m (x2 over 32m) kubelet Node ip-10-250-28-25.ec2.internal status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 32m (x2 over 32m) kubelet Node ip-10-250-28-25.ec2.internal status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 32m (x2 over 
32m) kubelet Node ip-10-250-28-25.ec2.internal status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 32m kubelet Updated Node Allocatable limit across pods\n Normal NodeReady 32m kubelet Node ip-10-250-28-25.ec2.internal status is now: NodeReady\n Normal NoFrequentUnregisterNetDevice 31m kernel-monitor Node condition FrequentUnregisterNetDevice is now: Unknown, reason: NoFrequentUnregisterNetDevice\n Normal NoFrequentKubeletRestart 31m systemd-monitor Node condition FrequentKubeletRestart is now: Unknown, reason: NoFrequentKubeletRestart\n Normal NoFrequentDockerRestart 31m systemd-monitor Node condition FrequentDockerRestart is now: Unknown, reason: NoFrequentDockerRestart\n Normal NoFrequentContainerdRestart 31m systemd-monitor Node condition FrequentContainerdRestart is now: Unknown, reason: NoFrequentContainerdRestart\n Normal NoFrequentUnregisterNetDevice 5m33s kernel-monitor Node condition FrequentUnregisterNetDevice is now: Unknown, reason: NoFrequentUnregisterNetDevice\n Normal NoFrequentKubeletRestart 5m33s systemd-monitor Node condition FrequentKubeletRestart is now: Unknown, reason: NoFrequentKubeletRestart\n Normal NoFrequentDockerRestart 5m32s systemd-monitor Node condition FrequentDockerRestart is now: Unknown, reason: NoFrequentDockerRestart\n Normal NoFrequentContainerdRestart 5m32s systemd-monitor Node condition FrequentContainerdRestart is now: Unknown, reason: NoFrequentContainerdRestart\n" +Oct 27 14:26:04.541: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8288 describe namespace kubectl-8288' +Oct 27 14:26:05.138: INFO: stderr: "" +Oct 27 14:26:05.138: INFO: stdout: "Name: kubectl-8288\nLabels: e2e-framework=kubectl\n e2e-run=70495107-0c9e-4c12-bbb2-c9f041d8ff81\n kubernetes.io/metadata.name=kubectl-8288\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:26:05.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8288" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":109,"skipped":1736,"failed":0} +SSSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:26:05.408: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingressclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingressclass-9333 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:26:06.950: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:26:07.221: INFO: waiting for watch events with expected annotations +Oct 27 14:26:07.221: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:26:07.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-9333" for this suite. 
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":110,"skipped":1743,"failed":0} +SSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:26:07.858: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-6789 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:09.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-6789" for this suite. + +• [SLOW TEST:301.456 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":111,"skipped":1748,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:09.314: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8578 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-58aceb6f-25c4-4492-aa49-8f6f99159a7a +STEP: Creating a pod to test consume configMaps +Oct 27 14:31:10.241: INFO: Waiting up to 5m0s for pod "pod-configmaps-a41330ff-8b02-4c29-acd4-d122eb6a7858" in namespace "configmap-8578" to be "Succeeded or Failed" +Oct 27 14:31:10.331: INFO: Pod "pod-configmaps-a41330ff-8b02-4c29-acd4-d122eb6a7858": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.162222ms +Oct 27 14:31:12.424: INFO: Pod "pod-configmaps-a41330ff-8b02-4c29-acd4-d122eb6a7858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.183137034s +STEP: Saw pod success +Oct 27 14:31:12.424: INFO: Pod "pod-configmaps-a41330ff-8b02-4c29-acd4-d122eb6a7858" satisfied condition "Succeeded or Failed" +Oct 27 14:31:12.515: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-a41330ff-8b02-4c29-acd4-d122eb6a7858 container agnhost-container: +STEP: delete the pod +Oct 27 14:31:12.722: INFO: Waiting for pod pod-configmaps-a41330ff-8b02-4c29-acd4-d122eb6a7858 to disappear +Oct 27 14:31:12.812: INFO: Pod pod-configmaps-a41330ff-8b02-4c29-acd4-d122eb6a7858 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:12.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8578" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":112,"skipped":1757,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:13.083: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-452 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:14.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-452" for this suite. 
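The ConfigMap volume tests in this run mount a ConfigMap into a pod with an items mapping while running as a non-root user; a minimal sketch under assumed names:

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata: {name: cm-reader}
    spec:
      restartPolicy: Never
      securityContext: {runAsUser: 1000}          # non-root, as the test requires
      containers:
      - name: reader
        image: busybox:1.28
        command: ["cat", "/etc/cm/mapped-key"]    # key remapped to a different file path
        volumeMounts: [{name: cm, mountPath: /etc/cm}]
      volumes:
      - name: cm
        configMap:
          name: demo-cm
          items: [{key: data-1, path: mapped-key}]
    EOF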
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":113,"skipped":1797,"failed":0} +SSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:14.185: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-4854 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:15.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-4854" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":114,"skipped":1801,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:15.654: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1644 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:31:17.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941877, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941877, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941877, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941877, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:31:20.959: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:21.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1644" for this suite. +STEP: Destroying namespace "webhook-1644-markers" for this suite. 
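The discovery-document checks above can be replayed read-only against any API server; these are the same discovery paths the test fetches:

    kubectl get --raw /apis/admissionregistration.k8s.io
    kubectl get --raw /apis/admissionregistration.k8s.io/v1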
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":115,"skipped":1822,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:22.101: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingress +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingress-2217 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:31:23.646: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:31:23.826: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:31:24.189: INFO: waiting for watch events with expected annotations +Oct 27 14:31:24.189: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:25.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-2217" for this suite. 
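A hedged sketch of the kind of Ingress object the Ingress API test manipulates; the host, service, and ingress names are placeholders, and the backing Service is assumed to exist:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress
    spec:
      rules:
      - host: demo.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend: {service: {name: demo-svc, port: {number: 80}}}
    EOF
    kubectl get ingress demo-ingress -o yaml
    kubectl delete ingress demo-ingress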
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":116,"skipped":1849,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:25.189: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-7223 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:26.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-7223" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":117,"skipped":1873,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:26.702: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3434 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:27.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3434" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":118,"skipped":1881,"failed":0} + +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:28.175: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8553 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:29.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8553" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":119,"skipped":1881,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:29.634: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6600 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:32.644: INFO: Deleting pod "var-expansion-f32e1aed-a456-4f2e-b0e4-3c88ddfb790d" in namespace "var-expansion-6600" +Oct 27 14:31:32.735: INFO: Wait up to 5m0s for pod "var-expansion-f32e1aed-a456-4f2e-b0e4-3c88ddfb790d" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:36.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6600" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":120,"skipped":1891,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:37.188: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5489 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Oct 27 14:31:38.107: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:31:40.198: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Oct 27 14:31:40.561: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:40.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5489" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":121,"skipped":1913,"failed":0} +SSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:41.183: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2050 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-fa75261d-8177-4f6e-a409-19a46cd60af7 +STEP: Creating a pod to test consume configMaps +Oct 27 14:31:42.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-e542dbe2-a5e5-45ad-8264-d1e3c2635b45" in namespace "configmap-2050" to be "Succeeded or Failed" +Oct 27 14:31:42.194: INFO: Pod "pod-configmaps-e542dbe2-a5e5-45ad-8264-d1e3c2635b45": Phase="Pending", Reason="", readiness=false. Elapsed: 90.244266ms +Oct 27 14:31:44.286: INFO: Pod "pod-configmaps-e542dbe2-a5e5-45ad-8264-d1e3c2635b45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181880754s +STEP: Saw pod success +Oct 27 14:31:44.286: INFO: Pod "pod-configmaps-e542dbe2-a5e5-45ad-8264-d1e3c2635b45" satisfied condition "Succeeded or Failed" +Oct 27 14:31:44.377: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-e542dbe2-a5e5-45ad-8264-d1e3c2635b45 container agnhost-container: +STEP: delete the pod +Oct 27 14:31:44.569: INFO: Waiting for pod pod-configmaps-e542dbe2-a5e5-45ad-8264-d1e3c2635b45 to disappear +Oct 27 14:31:44.659: INFO: Pod pod-configmaps-e542dbe2-a5e5-45ad-8264-d1e3c2635b45 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:44.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2050" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":122,"skipped":1919,"failed":0} +SSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:44.930: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-836 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:46.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-836" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":123,"skipped":1924,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:46.478: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-463 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:31:48.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941907, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941907, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941907, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941907, 
loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:31:51.310: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:51.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4525-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:54.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-463" for this suite. +STEP: Destroying namespace "webhook-463-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":124,"skipped":1941,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:55.100: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-847 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Oct 27 14:31:58.439: INFO: &Pod{ObjectMeta:{send-events-61420bd9-ab24-46d1-845f-e914123eeaa7 events-847 dc4548c4-65d2-430c-a090-670cd7c93def 17398 0 2021-10-27 14:31:56 +0000 UTC map[name:foo time:979823566] map[cni.projectcalico.org/containerID:4a5e2a6e2c704a8dde14cf9d91f03bd72e4ffab7266f50b50022dbc27ed43a29 cni.projectcalico.org/podIP:100.96.1.154/32 cni.projectcalico.org/podIPs:100.96.1.154/32 kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{calico Update v1 2021-10-27 14:31:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {e2e.test Update v1 2021-10-27 14:31:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:31:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.154\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-74kjc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74kjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,Securi
tyContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:31:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.154,StartTime:2021-10-27 14:31:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:31:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://795f0740d08b9559a2077020a949a17cccf62e592d635bd325ecbc7e381bccbc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +STEP: checking for scheduler event about the pod +Oct 27 14:32:00.529: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Oct 27 14:32:02.621: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:02.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-847" for this suite. 
+•{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":346,"completed":125,"skipped":1975,"failed":0} + +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:02.984: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2110 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:32:03.719: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2110 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' +Oct 27 14:32:04.062: INFO: stderr: "" +Oct 27 14:32:04.062: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 +Oct 27 14:32:04.153: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2110 delete pods e2e-test-httpd-pod' +Oct 27 14:32:06.397: INFO: stderr: "" +Oct 27 14:32:06.397: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:06.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2110" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":126,"skipped":1975,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:06.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-9627 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:07.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption-2 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2-8029 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-9627 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:09.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-8029" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:09.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9627" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":127,"skipped":2001,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:09.405: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-351 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:32:10.772: INFO: Number of nodes with available pods: 0 +Oct 27 14:32:10.772: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:32:12.041: INFO: Number of nodes with available pods: 0 +Oct 27 14:32:12.041: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:32:13.041: INFO: Number of nodes with available pods: 2 +Oct 27 14:32:13.042: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Oct 27 14:32:13.497: INFO: Number of nodes with available pods: 1 +Oct 27 14:32:13.497: INFO: Node ip-10-250-9-48.ec2.internal is running more than one daemon pod +Oct 27 14:32:14.802: INFO: Number of nodes with available pods: 1 +Oct 27 14:32:14.802: INFO: Node ip-10-250-9-48.ec2.internal is running more than one daemon pod +Oct 27 14:32:15.768: INFO: Number of nodes with available pods: 1 +Oct 27 14:32:15.769: INFO: Node ip-10-250-9-48.ec2.internal is running more than one daemon pod +Oct 27 14:32:16.768: INFO: Number of nodes with available pods: 1 +Oct 27 14:32:16.768: INFO: Node ip-10-250-9-48.ec2.internal is running more than one daemon pod +Oct 27 14:32:17.767: INFO: Number of nodes with available pods: 1 +Oct 27 14:32:17.767: INFO: Node ip-10-250-9-48.ec2.internal is running more than one daemon pod +Oct 27 14:32:18.767: INFO: Number of nodes with available pods: 2 +Oct 27 14:32:18.767: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-351, will wait for the garbage collector to delete the pods +Oct 27 14:32:19.139: INFO: Deleting DaemonSet.extensions daemon-set took: 90.739326ms +Oct 27 14:32:19.240: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.608951ms +Oct 27 14:32:21.030: INFO: Number of nodes with available pods: 0 +Oct 27 14:32:21.030: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:32:21.120: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"17630"},"items":null} + +Oct 27 14:32:21.210: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"17630"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:21.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-351" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":128,"skipped":2019,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:21.671: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-7984 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:32:22.591: INFO: The status of Pod server-envvars-bf11f8b8-ed71-4e30-b680-f87ffbdc35fc is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:24.681: INFO: The status of Pod server-envvars-bf11f8b8-ed71-4e30-b680-f87ffbdc35fc is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:26.682: INFO: The status of Pod server-envvars-bf11f8b8-ed71-4e30-b680-f87ffbdc35fc is Running (Ready = true) +Oct 27 14:32:27.005: INFO: Waiting up to 5m0s for pod "client-envvars-2cbf6266-2c5b-498f-a55e-df3a147eb902" in namespace "pods-7984" to be "Succeeded or Failed" +Oct 27 14:32:27.101: INFO: Pod "client-envvars-2cbf6266-2c5b-498f-a55e-df3a147eb902": Phase="Pending", Reason="", readiness=false. Elapsed: 95.89661ms +Oct 27 14:32:29.192: INFO: Pod "client-envvars-2cbf6266-2c5b-498f-a55e-df3a147eb902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.187595459s +STEP: Saw pod success +Oct 27 14:32:29.192: INFO: Pod "client-envvars-2cbf6266-2c5b-498f-a55e-df3a147eb902" satisfied condition "Succeeded or Failed" +Oct 27 14:32:29.283: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod client-envvars-2cbf6266-2c5b-498f-a55e-df3a147eb902 container env3cont: +STEP: delete the pod +Oct 27 14:32:29.474: INFO: Waiting for pod client-envvars-2cbf6266-2c5b-498f-a55e-df3a147eb902 to disappear +Oct 27 14:32:29.564: INFO: Pod client-envvars-2cbf6266-2c5b-498f-a55e-df3a147eb902 no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:29.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7984" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":129,"skipped":2050,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:29.835: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-5285 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 14:32:30.753: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:32.846: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:32:33.123: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:32:35.214: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 14:32:35.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 14:32:35.583: INFO: Pod pod-with-poststart-http-hook still exists +Oct 27 14:32:37.584: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 14:32:37.675: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:37.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-5285" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":130,"skipped":2088,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:37.947: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-9161 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:32:38.953: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:33:39.689: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:39.779: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-7150 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:33:40.784: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Oct 27 14:33:40.875: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:41.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-7150" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:41.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-9161" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":131,"skipped":2128,"failed":0} +SSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:42.253: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4616 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:33:42.988: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:33:43.175: INFO: The status of Pod pod-logs-websocket-ef114dd7-b66c-4d48-963d-f5a1b56270bf is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:33:45.268: INFO: The status of Pod pod-logs-websocket-ef114dd7-b66c-4d48-963d-f5a1b56270bf is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:33:47.267: INFO: The status of Pod pod-logs-websocket-ef114dd7-b66c-4d48-963d-f5a1b56270bf is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:47.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4616" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":132,"skipped":2131,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:47.909: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1460 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:49.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-1460" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":133,"skipped":2146,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:49.384: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-9447 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:33:50.388: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Oct 27 14:33:52.752: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Oct 27 14:33:52.933: INFO: observed ReplicaSet test-rs in namespace replicaset-9447 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 14:33:52.933: INFO: observed ReplicaSet test-rs in namespace replicaset-9447 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 14:33:52.933: INFO: observed ReplicaSet test-rs in namespace replicaset-9447 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 14:33:54.518: INFO: observed ReplicaSet test-rs in namespace replicaset-9447 with ReadyReplicas 2, AvailableReplicas 2 +Oct 27 
14:33:54.651: INFO: observed Replicaset test-rs in namespace replicaset-9447 with ReadyReplicas 3 found true
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 27 14:33:54.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replicaset-9447" for this suite.
+•{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":134,"skipped":2173,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 27 14:33:54.922: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6746
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: set up a multi version CRD
+Oct 27 14:33:55.654: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: mark a version not served
+STEP: check the unserved version gets removed
+STEP: check the other version is not changed
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 27 14:34:24.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-6746" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":135,"skipped":2200,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:24.808: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-507 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:34:25.919: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0ee87178-9e9e-4c0a-ac07-00d4e51ca001", Controller:(*bool)(0xc004a2270e), BlockOwnerDeletion:(*bool)(0xc004a2270f)}} +Oct 27 14:34:26.011: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ee66fcfe-d6b6-4f06-ad80-e7717654158f", Controller:(*bool)(0xc0049ddb3e), BlockOwnerDeletion:(*bool)(0xc0049ddb3f)}} +Oct 27 14:34:26.103: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"230a174f-23a1-4626-9e97-6c14e51d597c", Controller:(*bool)(0xc0049fbc7e), BlockOwnerDeletion:(*bool)(0xc0049fbc7f)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:31.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-507" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":136,"skipped":2212,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:31.570: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3556 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-6c27c30e-05f6-4ea4-a77f-d847e0c68b09 +STEP: Creating a pod to test consume configMaps +Oct 27 14:34:32.489: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-24c83fba-228b-4b48-a8a7-20fa6bb603b5" in namespace "projected-3556" to be "Succeeded or Failed" +Oct 27 14:34:32.580: INFO: Pod "pod-projected-configmaps-24c83fba-228b-4b48-a8a7-20fa6bb603b5": Phase="Pending", Reason="", readiness=false. Elapsed: 90.418646ms +Oct 27 14:34:34.671: INFO: Pod "pod-projected-configmaps-24c83fba-228b-4b48-a8a7-20fa6bb603b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181727377s +STEP: Saw pod success +Oct 27 14:34:34.671: INFO: Pod "pod-projected-configmaps-24c83fba-228b-4b48-a8a7-20fa6bb603b5" satisfied condition "Succeeded or Failed" +Oct 27 14:34:34.761: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-configmaps-24c83fba-228b-4b48-a8a7-20fa6bb603b5 container agnhost-container: +STEP: delete the pod +Oct 27 14:34:34.992: INFO: Waiting for pod pod-projected-configmaps-24c83fba-228b-4b48-a8a7-20fa6bb603b5 to disappear +Oct 27 14:34:35.082: INFO: Pod pod-projected-configmaps-24c83fba-228b-4b48-a8a7-20fa6bb603b5 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:35.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3556" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":137,"skipped":2215,"failed":0} +S +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:35.352: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-2245 +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:34:36.084: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:43.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-2245" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":138,"skipped":2216,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:43.283: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-1840 +STEP: Waiting for a default service account to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:34:44.200: INFO: Creating pod... +Oct 27 14:34:44.401: INFO: Pod Quantity: 1 Status: Pending +Oct 27 14:34:45.501: INFO: Pod Quantity: 1 Status: Pending +Oct 27 14:34:46.492: INFO: Pod Quantity: 1 Status: Pending +Oct 27 14:34:47.492: INFO: Pod Status: Running +Oct 27 14:34:47.492: INFO: Creating service... 
+Oct 27 14:34:47.588: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/pods/agnhost/proxy/some/path/with/DELETE +Oct 27 14:34:47.689: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 14:34:47.689: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/pods/agnhost/proxy/some/path/with/GET +Oct 27 14:34:47.825: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 14:34:47.825: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/pods/agnhost/proxy/some/path/with/HEAD +Oct 27 14:34:47.920: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 14:34:47.920: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/pods/agnhost/proxy/some/path/with/OPTIONS +Oct 27 14:34:48.015: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 14:34:48.015: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/pods/agnhost/proxy/some/path/with/PATCH +Oct 27 14:34:48.109: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 27 14:34:48.109: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/pods/agnhost/proxy/some/path/with/POST +Oct 27 14:34:48.300: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 14:34:48.300: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/pods/agnhost/proxy/some/path/with/PUT +Oct 27 14:34:48.406: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 27 14:34:48.407: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/services/test-service/proxy/some/path/with/DELETE +Oct 27 14:34:48.504: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 14:34:48.504: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/services/test-service/proxy/some/path/with/GET +Oct 27 14:34:48.604: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 14:34:48.604: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/services/test-service/proxy/some/path/with/HEAD +Oct 27 14:34:48.704: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 14:34:48.704: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/services/test-service/proxy/some/path/with/OPTIONS +Oct 27 14:34:48.803: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 14:34:48.803: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/services/test-service/proxy/some/path/with/PATCH +Oct 27 14:34:48.903: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 27 14:34:48.903: INFO: Starting http.Client for 
https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/services/test-service/proxy/some/path/with/POST +Oct 27 14:34:48.999: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 14:34:48.999: INFO: Starting http.Client for https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-1840/services/test-service/proxy/some/path/with/PUT +Oct 27 14:34:49.093: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:49.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-1840" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":139,"skipped":2276,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:49.365: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2730 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:34:51.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942090, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942090, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942090, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942090, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:34:54.321: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
+Oct 27 14:34:54.412: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4632-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:57.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2730" for this suite. +STEP: Destroying namespace "webhook-2730-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":140,"skipped":2296,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:58.425: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-5701 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:34:59.280: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Oct 27 14:35:04.134: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 --namespace=crd-publish-openapi-5701 create -f -' +Oct 27 14:35:05.706: INFO: stderr: "" +Oct 27 14:35:05.706: INFO: stdout: "e2e-test-crd-publish-openapi-9377-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 14:35:05.706: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 --namespace=crd-publish-openapi-5701 delete e2e-test-crd-publish-openapi-9377-crds test-foo' +Oct 27 14:35:06.127: INFO: stderr: "" +Oct 27 14:35:06.127: INFO: stdout: "e2e-test-crd-publish-openapi-9377-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Oct 27 14:35:06.127: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 
--namespace=crd-publish-openapi-5701 apply -f -' +Oct 27 14:35:06.826: INFO: stderr: "" +Oct 27 14:35:06.826: INFO: stdout: "e2e-test-crd-publish-openapi-9377-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 14:35:06.826: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 --namespace=crd-publish-openapi-5701 delete e2e-test-crd-publish-openapi-9377-crds test-foo' +Oct 27 14:35:07.239: INFO: stderr: "" +Oct 27 14:35:07.239: INFO: stdout: "e2e-test-crd-publish-openapi-9377-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Oct 27 14:35:07.239: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 --namespace=crd-publish-openapi-5701 create -f -' +Oct 27 14:35:07.681: INFO: rc: 1 +Oct 27 14:35:07.681: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 --namespace=crd-publish-openapi-5701 apply -f -' +Oct 27 14:35:08.095: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Oct 27 14:35:08.095: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 --namespace=crd-publish-openapi-5701 create -f -' +Oct 27 14:35:08.507: INFO: rc: 1 +Oct 27 14:35:08.507: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 --namespace=crd-publish-openapi-5701 apply -f -' +Oct 27 14:35:08.933: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Oct 27 14:35:08.933: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 explain e2e-test-crd-publish-openapi-9377-crds' +Oct 27 14:35:09.360: INFO: stderr: "" +Oct 27 14:35:09.360: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9377-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Oct 27 14:35:09.361: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 explain e2e-test-crd-publish-openapi-9377-crds.metadata' +Oct 27 14:35:09.796: INFO: stderr: "" +Oct 27 14:35:09.796: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9377-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. 
If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
+Oct 27 14:35:09.797: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 explain e2e-test-crd-publish-openapi-9377-crds.spec'
+Oct 27 14:35:10.210: INFO: stderr: ""
+Oct 27 14:35:10.210: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9377-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
+Oct 27 14:35:10.210: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 explain e2e-test-crd-publish-openapi-9377-crds.spec.bars'
+Oct 27 14:35:10.638: INFO: stderr: ""
+Oct 27 14:35:10.638: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9377-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
+STEP: kubectl explain works to return error when explain is called on property that doesn't exist
+Oct 27 14:35:10.639: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5701 explain e2e-test-crd-publish-openapi-9377-crds.spec.bars2'
+Oct 27 14:35:11.072: INFO: rc: 1
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 27 14:35:15.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-5701" for this suite. 
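+
+The check above confirms that the CRD's OpenAPI schema is published by the apiserver and browsable with `kubectl explain`, and that a property missing from the schema is rejected (rc: 1). A minimal manual reproduction against the shoot cluster could look like the sketch below, assuming `KUBECONFIG` points at the shoot kubeconfig; the CRD name is generated per test run, so substitute the one from your run:
+
+```bash
+# Walk the published schema of the custom resource (per-run name, substitute yours).
+kubectl explain e2e-test-crd-publish-openapi-9377-crds.spec
+kubectl explain e2e-test-crd-publish-openapi-9377-crds.spec.bars
+# A property that is not in the schema must fail, mirroring "rc: 1" above.
+kubectl explain e2e-test-crd-publish-openapi-9377-crds.spec.bars2 || echo "rejected as expected"
+```
+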
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":141,"skipped":2320,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:15.611: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6624 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-52dadb25-b89d-4dce-a62f-130f6f158d89 +STEP: Creating secret with name s-test-opt-upd-028dcef6-8c05-40ae-b857-3b9b21c85991 +STEP: Creating the pod +Oct 27 14:35:16.807: INFO: The status of Pod pod-projected-secrets-0e0476ff-0eff-4cd2-a27e-c4d2aa1d2110 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:35:18.898: INFO: The status of Pod pod-projected-secrets-0e0476ff-0eff-4cd2-a27e-c4d2aa1d2110 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-52dadb25-b89d-4dce-a62f-130f6f158d89 +STEP: Updating secret s-test-opt-upd-028dcef6-8c05-40ae-b857-3b9b21c85991 +STEP: Creating secret with name s-test-opt-create-80142cbc-fa5f-423c-90d0-2fdd8380a3c3 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:50.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6624" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":142,"skipped":2347,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:50.546: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8543 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-e12ee1ab-b88e-4689-8b9e-c385e0b2b525 in namespace container-probe-8543 +Oct 27 14:36:53.565: INFO: Started pod liveness-e12ee1ab-b88e-4689-8b9e-c385e0b2b525 in namespace container-probe-8543 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:36:53.655: INFO: Initial restart count of pod liveness-e12ee1ab-b88e-4689-8b9e-c385e0b2b525 is 0 +Oct 27 14:37:12.566: INFO: Restart count of pod container-probe-8543/liveness-e12ee1ab-b88e-4689-8b9e-c385e0b2b525 is now 1 (18.911370356s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:37:12.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-8543" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":143,"skipped":2372,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:37:12.932: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-2199 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-2199 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating stateful set ss in namespace statefulset-2199 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2199 +Oct 27 14:37:13.935: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 14:37:24.028: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Oct 27 14:37:24.119: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-2199 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:37:25.183: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:37:25.183: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:37:25.183: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:37:25.274: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 14:37:35.365: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:37:35.365: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:37:35.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999619s +Oct 27 14:37:36.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.909256243s +Oct 27 14:37:37.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.817450365s +Oct 27 14:37:39.004: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.725421807s +Oct 27 14:37:40.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 
5.63122017s +Oct 27 14:37:41.188: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.539626902s +Oct 27 14:37:42.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.44846411s +Oct 27 14:37:43.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.357405066s +Oct 27 14:37:44.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.266309252s +Oct 27 14:37:45.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 174.389514ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2199 +Oct 27 14:37:46.645: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-2199 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:37:47.656: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:37:47.656: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:37:47.656: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:37:47.656: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-2199 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:37:48.740: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 14:37:48.740: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:37:48.740: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:37:48.740: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-2199 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:37:49.814: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 14:37:49.814: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:37:49.814: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:37:49.905: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:37:49.905: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:37:49.905: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Oct 27 14:37:49.995: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-2199 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:37:51.020: INFO: 
stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:37:51.020: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:37:51.020: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:37:51.020: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-2199 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:37:52.022: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:37:52.022: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:37:52.022: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:37:52.023: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-2199 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:37:53.072: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:37:53.072: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:37:53.072: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:37:53.072: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:37:53.163: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 +Oct 27 14:38:03.345: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:38:03.345: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:38:03.345: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 14:38:03.618: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 14:38:03.618: INFO: ss-0 ip-10-250-28-25.ec2.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:13 +0000 UTC }] +Oct 27 14:38:03.618: INFO: ss-1 ip-10-250-28-25.ec2.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:35 +0000 UTC }] +Oct 27 14:38:03.618: INFO: ss-2 ip-10-250-9-48.ec2.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:53 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:37:35 +0000 UTC }] +Oct 27 14:38:03.618: INFO: +Oct 27 14:38:03.618: INFO: StatefulSet ss has not reached scale 0, at 3 +Oct 27 14:38:04.708: INFO: Verifying statefulset ss doesn't scale past 0 for another 8.909030352s +Oct 27 14:38:05.798: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.818584055s +Oct 27 14:38:06.889: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.728452268s +Oct 27 14:38:07.980: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.637180736s +Oct 27 14:38:09.071: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.546563396s +Oct 27 14:38:10.161: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.455879325s +Oct 27 14:38:11.252: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.365304957s +Oct 27 14:38:12.343: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.274327151s +Oct 27 14:38:13.433: INFO: Verifying statefulset ss doesn't scale past 0 for another 183.714924ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2199 +Oct 27 14:38:14.524: INFO: Scaling statefulset ss to 0 +Oct 27 14:38:14.796: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:38:14.886: INFO: Deleting all statefulset in ns statefulset-2199 +Oct 27 14:38:14.977: INFO: Scaling statefulset ss to 0 +Oct 27 14:38:15.248: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:38:15.339: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:15.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2199" for this suite. 
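+
+Burst scaling behaves this way because the StatefulSet under test uses `podManagementPolicy: Parallel`, so the controller creates and deletes pods without waiting for ordinal-by-ordinal readiness; the Ready flapping above is induced by moving the httpd index.html out of and back into place. A sketch of the relevant knob (hypothetical manifest, not the generated test spec):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: ss
+spec:
+  serviceName: test
+  podManagementPolicy: Parallel   # default OrderedReady would gate scaling on readiness
+  replicas: 1
+  selector:
+    matchLabels: {app: ss}
+  template:
+    metadata:
+      labels: {app: ss}
+    spec:
+      containers:
+      - name: webserver
+        image: httpd:2.4
+        readinessProbe:
+          httpGet: {path: /index.html, port: 80}
+EOF
+kubectl scale statefulset ss --replicas=3   # burst up, even with unready pods
+kubectl scale statefulset ss --replicas=0   # burst down
+```
+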
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":144,"skipped":2380,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:15.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3330 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on node default medium +Oct 27 14:38:16.714: INFO: Waiting up to 5m0s for pod "pod-224415f7-c0c6-4db0-94ef-70c379cd6ab3" in namespace "emptydir-3330" to be "Succeeded or Failed" +Oct 27 14:38:16.805: INFO: Pod "pod-224415f7-c0c6-4db0-94ef-70c379cd6ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 90.458083ms +Oct 27 14:38:18.896: INFO: Pod "pod-224415f7-c0c6-4db0-94ef-70c379cd6ab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181625963s +STEP: Saw pod success +Oct 27 14:38:18.896: INFO: Pod "pod-224415f7-c0c6-4db0-94ef-70c379cd6ab3" satisfied condition "Succeeded or Failed" +Oct 27 14:38:18.986: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-224415f7-c0c6-4db0-94ef-70c379cd6ab3 container test-container: +STEP: delete the pod +Oct 27 14:38:19.179: INFO: Waiting for pod pod-224415f7-c0c6-4db0-94ef-70c379cd6ab3 to disappear +Oct 27 14:38:19.269: INFO: Pod pod-224415f7-c0c6-4db0-94ef-70c379cd6ab3 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:19.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3330" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":145,"skipped":2389,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:19.540: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-6929 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-phtc9 in namespace proxy-6929 +I1027 14:38:20.457650 5725 runners.go:190] Created replication controller with name: proxy-service-phtc9, namespace: proxy-6929, replica count: 1 +I1027 14:38:21.558969 5725 runners.go:190] proxy-service-phtc9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:38:22.559949 5725 runners.go:190] proxy-service-phtc9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I1027 14:38:23.560568 5725 runners.go:190] proxy-service-phtc9 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:38:23.651: INFO: setup took 3.380068457s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Oct 27 14:38:23.900: INFO: (0) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 248.857807ms) +Oct 27 14:38:23.900: INFO: (0) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 248.931608ms) +Oct 27 14:38:23.900: INFO: (0) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 248.951526ms) +Oct 27 14:38:23.906: INFO: (0) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... 
(200; 254.759308ms) +Oct 27 14:38:23.906: INFO: (0) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 254.947896ms) +Oct 27 14:38:23.906: INFO: (0) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 254.95578ms) +Oct 27 14:38:23.906: INFO: (0) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 255.027708ms) +Oct 27 14:38:23.906: INFO: (0) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 254.920684ms) +Oct 27 14:38:23.906: INFO: (0) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 254.871376ms) +Oct 27 14:38:23.906: INFO: (0) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 254.907486ms) +Oct 27 14:38:23.910: INFO: (0) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 259.666825ms) +Oct 27 14:38:23.915: INFO: (0) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 263.889542ms) +Oct 27 14:38:23.915: INFO: (0) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 263.895092ms) +Oct 27 14:38:23.915: INFO: (0) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 263.892072ms) +Oct 27 14:38:23.919: INFO: (0) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test<... (200; 96.087857ms) +Oct 27 14:38:24.017: INFO: (1) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.107454ms) +Oct 27 14:38:24.017: INFO: (1) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.156062ms) +Oct 27 14:38:24.018: INFO: (1) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.119768ms) +Oct 27 14:38:24.017: INFO: (1) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.131816ms) +Oct 27 14:38:24.017: INFO: (1) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 96.136398ms) +Oct 27 14:38:24.017: INFO: (1) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.197041ms) +Oct 27 14:38:24.017: INFO: (1) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.085832ms) +Oct 27 14:38:24.020: INFO: (1) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 98.583823ms) +Oct 27 14:38:24.020: INFO: (1) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 98.754481ms) +Oct 27 14:38:24.020: INFO: (1) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... 
(200; 98.86574ms) +Oct 27 14:38:24.020: INFO: (1) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 98.624891ms) +Oct 27 14:38:24.022: INFO: (1) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 100.267165ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.026473ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.055142ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.224681ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.049954ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.12756ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.162099ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.132349ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 96.039123ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.203016ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 96.111212ms) +Oct 27 14:38:24.118: INFO: (2) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.174282ms) +Oct 27 14:38:24.123: INFO: (2) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 100.810048ms) +Oct 27 14:38:24.123: INFO: (2) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 100.835342ms) +Oct 27 14:38:24.123: INFO: (2) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 100.927381ms) +Oct 27 14:38:24.123: INFO: (2) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 100.852585ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 96.228385ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.329207ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.296665ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 96.234297ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.309158ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... 
(200; 96.333048ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 96.347963ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.340569ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.289543ms) +Oct 27 14:38:24.219: INFO: (3) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 96.643574ms) +Oct 27 14:38:24.320: INFO: (4) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.548147ms) +Oct 27 14:38:24.320: INFO: (4) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.6919ms) +Oct 27 14:38:24.320: INFO: (4) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.612293ms) +Oct 27 14:38:24.320: INFO: (4) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 96.68199ms) +Oct 27 14:38:24.320: INFO: (4) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 96.521403ms) +Oct 27 14:38:24.422: INFO: (5) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.499129ms) +Oct 27 14:38:24.422: INFO: (5) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.628893ms) +Oct 27 14:38:24.422: INFO: (5) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.588883ms) +Oct 27 14:38:24.422: INFO: (5) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.720174ms) +Oct 27 14:38:24.422: INFO: (5) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 96.609035ms) +Oct 27 14:38:24.422: INFO: (5) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.608972ms) +Oct 27 14:38:24.422: INFO: (5) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: ... (200; 95.825523ms) +Oct 27 14:38:24.523: INFO: (6) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... 
(200; 95.773611ms) +Oct 27 14:38:24.523: INFO: (6) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 95.76982ms) +Oct 27 14:38:24.523: INFO: (6) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 95.982326ms) +Oct 27 14:38:24.523: INFO: (6) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.010322ms) +Oct 27 14:38:24.523: INFO: (6) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 95.909869ms) +Oct 27 14:38:24.523: INFO: (6) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 95.871926ms) +Oct 27 14:38:24.523: INFO: (6) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 95.823204ms) +Oct 27 14:38:24.525: INFO: (6) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 98.307371ms) +Oct 27 14:38:24.527: INFO: (6) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 99.887624ms) +Oct 27 14:38:24.527: INFO: (6) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 100.024625ms) +Oct 27 14:38:24.527: INFO: (6) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.044269ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 95.926042ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 95.915029ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 95.970368ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.039827ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 95.880351ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 95.832504ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 95.858496ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 95.82046ms) +Oct 27 14:38:24.623: INFO: (7) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test<... (200; 94.647848ms) +Oct 27 14:38:24.722: INFO: (8) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 94.610938ms) +Oct 27 14:38:24.724: INFO: (8) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... 
(200; 96.355178ms) +Oct 27 14:38:24.724: INFO: (8) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 96.499444ms) +Oct 27 14:38:24.724: INFO: (8) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.417088ms) +Oct 27 14:38:24.724: INFO: (8) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.384173ms) +Oct 27 14:38:24.724: INFO: (8) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.410369ms) +Oct 27 14:38:24.724: INFO: (8) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.35252ms) +Oct 27 14:38:24.724: INFO: (8) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.502966ms) +Oct 27 14:38:24.725: INFO: (8) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: ... (200; 96.095206ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.031376ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.151795ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.103184ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 96.194467ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.211908ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 96.189838ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test<... (200; 96.167736ms) +Oct 27 14:38:24.825: INFO: (9) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.298105ms) +Oct 27 14:38:24.830: INFO: (9) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.941945ms) +Oct 27 14:38:24.830: INFO: (9) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.977583ms) +Oct 27 14:38:24.830: INFO: (9) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 100.859442ms) +Oct 27 14:38:24.830: INFO: (9) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 100.907585ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 97.325729ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 97.47758ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 97.451199ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 97.469114ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 97.433296ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 97.471725ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... 
(200; 97.449723ms) +Oct 27 14:38:24.928: INFO: (10) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 97.477273ms) +Oct 27 14:38:24.930: INFO: (10) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 99.636822ms) +Oct 27 14:38:24.932: INFO: (10) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 101.675138ms) +Oct 27 14:38:24.932: INFO: (10) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 101.526025ms) +Oct 27 14:38:24.932: INFO: (10) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 101.677462ms) +Oct 27 14:38:24.932: INFO: (10) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 101.645351ms) +Oct 27 14:38:25.029: INFO: (11) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 96.976902ms) +Oct 27 14:38:25.029: INFO: (11) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 97.081918ms) +Oct 27 14:38:25.029: INFO: (11) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 97.173846ms) +Oct 27 14:38:25.029: INFO: (11) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 97.042718ms) +Oct 27 14:38:25.031: INFO: (11) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 99.358404ms) +Oct 27 14:38:25.032: INFO: (11) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 99.368702ms) +Oct 27 14:38:25.032: INFO: (11) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 99.346343ms) +Oct 27 14:38:25.032: INFO: (11) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 99.351142ms) +Oct 27 14:38:25.032: INFO: (11) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 99.349596ms) +Oct 27 14:38:25.032: INFO: (11) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 99.415066ms) +Oct 27 14:38:25.033: INFO: (11) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 100.771205ms) +Oct 27 14:38:25.033: INFO: (11) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.835442ms) +Oct 27 14:38:25.035: INFO: (11) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 102.843598ms) +Oct 27 14:38:25.035: INFO: (11) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 102.870361ms) +Oct 27 14:38:25.131: INFO: (12) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.087129ms) +Oct 27 14:38:25.131: INFO: (12) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.152209ms) +Oct 27 14:38:25.131: INFO: (12) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.2163ms) +Oct 27 14:38:25.131: INFO: (12) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 96.269103ms) +Oct 27 14:38:25.131: INFO: (12) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.192431ms) +Oct 27 14:38:25.131: INFO: (12) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... 
(200; 96.367351ms) +Oct 27 14:38:25.132: INFO: (12) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.487834ms) +Oct 27 14:38:25.132: INFO: (12) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test<... (200; 96.400933ms) +Oct 27 14:38:25.136: INFO: (12) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 101.289363ms) +Oct 27 14:38:25.136: INFO: (12) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 101.434928ms) +Oct 27 14:38:25.136: INFO: (12) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 101.326984ms) +Oct 27 14:38:25.136: INFO: (12) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 101.301947ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.795121ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.763039ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.792487ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.935081ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 96.905369ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.83848ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.908093ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.922726ms) +Oct 27 14:38:25.234: INFO: (13) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 96.887879ms) +Oct 27 14:38:25.238: INFO: (13) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 101.592786ms) +Oct 27 14:38:25.238: INFO: (13) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 101.765507ms) +Oct 27 14:38:25.238: INFO: (13) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 101.570318ms) +Oct 27 14:38:25.238: INFO: (13) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 101.6882ms) +Oct 27 14:38:25.333: INFO: (14) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 94.710665ms) +Oct 27 14:38:25.333: INFO: (14) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 94.749603ms) +Oct 27 14:38:25.333: INFO: (14) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 94.724595ms) +Oct 27 14:38:25.333: INFO: (14) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 94.677738ms) +Oct 27 14:38:25.333: INFO: (14) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... 
(200; 94.726489ms) +Oct 27 14:38:25.333: INFO: (14) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 94.89059ms) +Oct 27 14:38:25.335: INFO: (14) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.214813ms) +Oct 27 14:38:25.335: INFO: (14) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 96.271955ms) +Oct 27 14:38:25.337: INFO: (14) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 98.248205ms) +Oct 27 14:38:25.337: INFO: (14) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 98.441093ms) +Oct 27 14:38:25.337: INFO: (14) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 98.281999ms) +Oct 27 14:38:25.339: INFO: (14) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 100.458144ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.27727ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.257112ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 96.228481ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test<... (200; 96.124086ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.118622ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 96.170632ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 96.284946ms) +Oct 27 14:38:25.435: INFO: (15) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.191847ms) +Oct 27 14:38:25.436: INFO: (15) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.186591ms) +Oct 27 14:38:25.436: INFO: (15) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.229316ms) +Oct 27 14:38:25.440: INFO: (15) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 100.748383ms) +Oct 27 14:38:25.440: INFO: (15) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 100.791788ms) +Oct 27 14:38:25.440: INFO: (15) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 100.770823ms) +Oct 27 14:38:25.440: INFO: (15) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.788732ms) +Oct 27 14:38:25.540: INFO: (16) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 100.018643ms) +Oct 27 14:38:25.540: INFO: (16) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.074125ms) +Oct 27 14:38:25.540: INFO: (16) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.04231ms) +Oct 27 14:38:25.541: INFO: (16) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... 
(200; 100.446485ms) +Oct 27 14:38:25.541: INFO: (16) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 100.412161ms) +Oct 27 14:38:25.541: INFO: (16) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 100.499455ms) +Oct 27 14:38:25.542: INFO: (16) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 102.18361ms) +Oct 27 14:38:25.542: INFO: (16) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 102.19503ms) +Oct 27 14:38:25.542: INFO: (16) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: ... (200; 102.191449ms) +Oct 27 14:38:25.542: INFO: (16) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 102.357655ms) +Oct 27 14:38:25.545: INFO: (16) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 104.580543ms) +Oct 27 14:38:25.545: INFO: (16) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 104.618687ms) +Oct 27 14:38:25.545: INFO: (16) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 104.493214ms) +Oct 27 14:38:25.548: INFO: (16) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 108.054166ms) +Oct 27 14:38:25.645: INFO: (17) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.174278ms) +Oct 27 14:38:25.645: INFO: (17) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 96.091052ms) +Oct 27 14:38:25.645: INFO: (17) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 96.263384ms) +Oct 27 14:38:25.645: INFO: (17) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.159552ms) +Oct 27 14:38:25.645: INFO: (17) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.156055ms) +Oct 27 14:38:25.645: INFO: (17) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.633664ms) +Oct 27 14:38:25.645: INFO: (17) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test (200; 96.390295ms) +Oct 27 14:38:25.746: INFO: (18) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.192724ms) +Oct 27 14:38:25.746: INFO: (18) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:1080/proxy/: test<... (200; 96.240595ms) +Oct 27 14:38:25.746: INFO: (18) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.276301ms) +Oct 27 14:38:25.747: INFO: (18) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 97.714541ms) +Oct 27 14:38:25.747: INFO: (18) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 97.605888ms) +Oct 27 14:38:25.747: INFO: (18) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... 
(200; 97.670589ms) +Oct 27 14:38:25.747: INFO: (18) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 97.648431ms) +Oct 27 14:38:25.747: INFO: (18) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 97.798616ms) +Oct 27 14:38:25.749: INFO: (18) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 99.595115ms) +Oct 27 14:38:25.751: INFO: (18) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 101.768179ms) +Oct 27 14:38:25.751: INFO: (18) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 101.935073ms) +Oct 27 14:38:25.753: INFO: (18) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 103.998987ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/: tls baz (200; 96.043019ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.13082ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname1/proxy/: foo (200; 96.076703ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:443/proxy/: test<... (200; 95.971871ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:462/proxy/: tls qux (200; 96.097367ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:1080/proxy/: ... (200; 96.096606ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname1/proxy/: foo (200; 96.04726ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname2/proxy/: tls qux (200; 96.015891ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/pods/https:proxy-service-phtc9-4fhmj:460/proxy/: tls baz (200; 96.073136ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:162/proxy/: bar (200; 96.249819ms) +Oct 27 14:38:25.850: INFO: (19) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj/proxy/: test (200; 96.104204ms) +Oct 27 14:38:25.854: INFO: (19) /api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.523537ms) +Oct 27 14:38:25.854: INFO: (19) /api/v1/namespaces/proxy-6929/services/http:proxy-service-phtc9:portname2/proxy/: bar (200; 100.584397ms) +Oct 27 14:38:25.854: INFO: (19) /api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/: bar (200; 100.634496ms) +Oct 27 14:38:25.854: INFO: (19) /api/v1/namespaces/proxy-6929/pods/http:proxy-service-phtc9-4fhmj:160/proxy/: foo (200; 100.705808ms) +STEP: deleting ReplicationController proxy-service-phtc9 in namespace proxy-6929, will wait for the garbage collector to delete the pods +Oct 27 14:38:26.137: INFO: Deleting ReplicationController proxy-service-phtc9 took: 91.155717ms +Oct 27 14:38:26.238: INFO: Terminating ReplicationController proxy-service-phtc9 pods took: 101.265944ms +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:26.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-6929" for this suite. 
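+
+Each of the 320 attempts above goes through the apiserver's proxy subresource, addressing the echo pod directly or via its service, with plain, `http:`-scheme, and `https:`-scheme port variants. The same paths can be exercised by hand; the sketch below reuses the per-run names from this log, so substitute your own:
+
+```bash
+# Proxy to a named pod port and to named service ports via the apiserver.
+kubectl get --raw "/api/v1/namespaces/proxy-6929/pods/proxy-service-phtc9-4fhmj:160/proxy/"              # -> foo
+kubectl get --raw "/api/v1/namespaces/proxy-6929/services/proxy-service-phtc9:portname2/proxy/"          # -> bar
+kubectl get --raw "/api/v1/namespaces/proxy-6929/services/https:proxy-service-phtc9:tlsportname1/proxy/" # -> tls baz
+```
+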
+•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":346,"completed":146,"skipped":2402,"failed":0} +SSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:26.922: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1016 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:27.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1016" for this suite. +•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":147,"skipped":2407,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:28.021: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1382 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-928f1081-4a43-42a9-9375-1fcff1d330fe in namespace container-probe-1382 +Oct 27 14:38:33.092: INFO: Started pod busybox-928f1081-4a43-42a9-9375-1fcff1d330fe in namespace container-probe-1382 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:38:33.182: INFO: Initial restart count of pod busybox-928f1081-4a43-42a9-9375-1fcff1d330fe is 0 +Oct 27 
14:39:21.396: INFO: Restart count of pod container-probe-1382/busybox-928f1081-4a43-42a9-9375-1fcff1d330fe is now 1 (48.213977791s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:21.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1382" for this suite. +•{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":148,"skipped":2416,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:21.761: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8592 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:39:22.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dba68853-04ad-436c-baaf-14f2e770d482" in namespace "projected-8592" to be "Succeeded or Failed" +Oct 27 14:39:22.701: INFO: Pod "downwardapi-volume-dba68853-04ad-436c-baaf-14f2e770d482": Phase="Pending", Reason="", readiness=false. Elapsed: 111.74694ms +Oct 27 14:39:24.792: INFO: Pod "downwardapi-volume-dba68853-04ad-436c-baaf-14f2e770d482": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203239694s +STEP: Saw pod success +Oct 27 14:39:24.792: INFO: Pod "downwardapi-volume-dba68853-04ad-436c-baaf-14f2e770d482" satisfied condition "Succeeded or Failed" +Oct 27 14:39:24.883: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-dba68853-04ad-436c-baaf-14f2e770d482 container client-container: +STEP: delete the pod +Oct 27 14:39:25.078: INFO: Waiting for pod downwardapi-volume-dba68853-04ad-436c-baaf-14f2e770d482 to disappear +Oct 27 14:39:25.168: INFO: Pod downwardapi-volume-dba68853-04ad-436c-baaf-14f2e770d482 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:25.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8592" for this suite. 
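The downward API volume this test creates can be reproduced with a short manifest. This is a minimal sketch, not the test's exact pod: the pod name is made up, and it assumes the agnhost image's `mounttest` subcommand (provided by the e2e test images) to print the projected file.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # with the default divisor of 1, 500m is written as 1
EOF
```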
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":149,"skipped":2426,"failed":0} +SSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:25.500: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7037 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:39:26.432: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 14:39:28.614: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Oct 27 14:39:30.704: INFO: Creating deployment "test-rollover-deployment" +Oct 27 14:39:30.887: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Oct 27 14:39:30.977: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Oct 27 14:39:31.158: INFO: Ensure that both replica sets have 1 created replica +Oct 27 14:39:31.339: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Oct 27 14:39:31.520: INFO: Updating deployment test-rollover-deployment +Oct 27 14:39:31.520: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Oct 27 14:39:31.610: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Oct 27 14:39:31.791: INFO: Make sure deployment "test-rollover-deployment" is complete +Oct 27 14:39:31.972: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:39:31.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942371, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:39:34.154: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:39:34.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942372, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:39:36.154: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:39:36.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942372, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:39:38.156: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:39:38.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942372, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:39:40.154: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:39:40.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942372, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:39:42.153: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 14:39:42.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942372, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942370, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:39:44.154: INFO: +Oct 27 14:39:44.154: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:39:44.425: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-7037 791d336e-a3d1-424c-9ff6-62fcaa328f4d 20521 2 2021-10-27 14:39:30 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 14:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:39:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d94628 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:39:30 +0000 UTC,LastTransitionTime:2021-10-27 14:39:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-27 14:39:42 +0000 UTC,LastTransitionTime:2021-10-27 14:39:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:39:44.516: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-7037 6ce994a5-f479-4645-843f-a54fa6dde9b9 20512 2 2021-10-27 14:39:31 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 791d336e-a3d1-424c-9ff6-62fcaa328f4d 0xc002d94c00 0xc002d94c01}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"791d336e-a3d1-424c-9ff6-62fcaa328f4d\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:39:42 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d94c98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:39:44.516: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Oct 27 14:39:44.516: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7037 edb7938a-109c-4cf5-8e01-1c8e745e7fdd 20520 2 2021-10-27 14:39:26 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 791d336e-a3d1-424c-9ff6-62fcaa328f4d 0xc002d949b7 0xc002d949b8}] [] [{e2e.test Update apps/v1 2021-10-27 14:39:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:39:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"791d336e-a3d1-424c-9ff6-62fcaa328f4d\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:39:42 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d94a78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:39:44.516: INFO: 
&ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-7037 cad3a7ec-f71f-4418-81c1-409ea7ce75fa 20453 2 2021-10-27 14:39:30 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 791d336e-a3d1-424c-9ff6-62fcaa328f4d 0xc002d94ae7 0xc002d94ae8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:39:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"791d336e-a3d1-424c-9ff6-62fcaa328f4d\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:39:31 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d94b98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:39:44.607: INFO: Pod "test-rollover-deployment-98c5f4599-xtfcw" is available: +&Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-xtfcw test-rollover-deployment-98c5f4599- deployment-7037 f5845b7a-33c7-42eb-a5f1-c3ce8bea0c85 20468 0 2021-10-27 14:39:31 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[cni.projectcalico.org/containerID:e2e9a54d59066aa373976f2febf8c6720ffde322825f1f6d4de38610cf724ccb cni.projectcalico.org/podIP:100.96.1.177/32 cni.projectcalico.org/podIPs:100.96.1.177/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 6ce994a5-f479-4645-843f-a54fa6dde9b9 0xc002d951e0 0xc002d951e1}] [] [{kube-controller-manager Update v1 2021-10-27 14:39:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce994a5-f479-4645-843f-a54fa6dde9b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:39:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:39:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.177\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-phssv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phssv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromS
ource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:39:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:39:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:39:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:39:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.177,StartTime:2021-10-27 14:39:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:39:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://743b9b541717fce33ea7a1c0954fc00eead7b1745d877cceaed96ffe10d73150,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:44.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7037" for this suite. 
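The rollover verified above can be approximated by hand: create a deployment, update its pod template, and watch the new ReplicaSet take over while the old ones scale to zero. The names below are made up; only the images are taken from this log.

```bash
kubectl create deployment test-rollover \
  --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
# Updating the template triggers a new ReplicaSet (the container is named
# "httpd" after the image basename when created this way).
kubectl set image deployment/test-rollover \
  httpd=k8s.gcr.io/e2e-test-images/agnhost:2.32
kubectl rollout status deployment/test-rollover
kubectl get replicasets -l app=test-rollover   # old RS at 0 replicas, new RS at 1
```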
+•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":150,"skipped":2430,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:44.878: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8338 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:45.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8338" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":151,"skipped":2446,"failed":0} + +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:45.885: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-8966 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:39:46.889: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:40:47.712: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 27 14:40:47.997: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 14:40:48.092: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 14:40:48.283: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 14:40:48.378: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 
+STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:57.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-8966" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":152,"skipped":2446,"failed":0} +S +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:58.298: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-1460 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override all +Oct 27 14:40:59.128: INFO: Waiting up to 5m0s for pod "client-containers-dda8e623-2dc8-4583-8201-0a45fb53081c" in namespace "containers-1460" to be "Succeeded or Failed" +Oct 27 14:40:59.218: INFO: Pod "client-containers-dda8e623-2dc8-4583-8201-0a45fb53081c": Phase="Pending", Reason="", readiness=false. Elapsed: 90.264853ms +Oct 27 14:41:01.309: INFO: Pod "client-containers-dda8e623-2dc8-4583-8201-0a45fb53081c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181283096s +STEP: Saw pod success +Oct 27 14:41:01.309: INFO: Pod "client-containers-dda8e623-2dc8-4583-8201-0a45fb53081c" satisfied condition "Succeeded or Failed" +Oct 27 14:41:01.399: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod client-containers-dda8e623-2dc8-4583-8201-0a45fb53081c container agnhost-container: +STEP: delete the pod +Oct 27 14:41:01.630: INFO: Waiting for pod client-containers-dda8e623-2dc8-4583-8201-0a45fb53081c to disappear +Oct 27 14:41:01.720: INFO: Pod client-containers-dda8e623-2dc8-4583-8201-0a45fb53081c no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:01.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1460" for this suite. 
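What "override all" means in the test above: in the pod spec, `command` replaces the image's ENTRYPOINT and `args` replaces its CMD. A minimal sketch with a hypothetical pod name, using the same agnhost image and its `entrypoint-tester` subcommand:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: override-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    command: ["/agnhost"]                                  # overrides ENTRYPOINT
    args: ["entrypoint-tester", "override", "arguments"]   # overrides CMD
EOF
kubectl logs override-demo   # once finished, prints the argv it actually ran with
```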
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":153,"skipped":2447,"failed":0} +S +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:01.992: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-6857 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:41:03.177: INFO: Create a RollingUpdate DaemonSet +Oct 27 14:41:03.267: INFO: Check that daemon pods launch on every node of the cluster +Oct 27 14:41:03.492: INFO: Number of nodes with available pods: 0 +Oct 27 14:41:03.492: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:41:04.762: INFO: Number of nodes with available pods: 2 +Oct 27 14:41:04.762: INFO: Number of running nodes: 2, number of available pods: 2 +Oct 27 14:41:04.762: INFO: Update the DaemonSet to trigger a rollout +Oct 27 14:41:04.943: INFO: Updating DaemonSet daemon-set +Oct 27 14:41:08.394: INFO: Roll back the DaemonSet before rollout is complete +Oct 27 14:41:08.576: INFO: Updating DaemonSet daemon-set +Oct 27 14:41:08.576: INFO: Make sure DaemonSet rollback is complete +Oct 27 14:41:09.849: INFO: Pod daemon-set-fh7sw is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6857, will wait for the garbage collector to delete the pods +Oct 27 14:41:10.492: INFO: Deleting DaemonSet.extensions daemon-set took: 90.982283ms +Oct 27 14:41:10.593: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.091762ms +Oct 27 14:41:13.083: INFO: Number of nodes with available pods: 0 +Oct 27 14:41:13.083: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:41:13.174: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"21123"},"items":null} + +Oct 27 14:41:13.264: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"21123"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:13.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6857" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":154,"skipped":2448,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:13.719: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9057 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 14:41:14.644: INFO: The status of Pod labelsupdate1fa8b84e-f1f1-4f5a-96a3-4686b0488f35 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:41:16.735: INFO: The status of Pod labelsupdate1fa8b84e-f1f1-4f5a-96a3-4686b0488f35 is Running (Ready = true) +Oct 27 14:41:17.646: INFO: Successfully updated pod "labelsupdate1fa8b84e-f1f1-4f5a-96a3-4686b0488f35" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:19.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9057" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":155,"skipped":2453,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:20.139: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6786 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:34.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6786" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":346,"completed":156,"skipped":2516,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:35.246: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1777 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:41:36.074: INFO: Waiting up to 5m0s for pod "downward-api-34fa5bf4-4a3d-4387-bd2f-3d302a0c1059" in namespace "downward-api-1777" to be "Succeeded or Failed" +Oct 27 14:41:36.164: INFO: Pod "downward-api-34fa5bf4-4a3d-4387-bd2f-3d302a0c1059": Phase="Pending", Reason="", readiness=false. Elapsed: 90.226998ms +Oct 27 14:41:38.255: INFO: Pod "downward-api-34fa5bf4-4a3d-4387-bd2f-3d302a0c1059": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181113442s +STEP: Saw pod success +Oct 27 14:41:38.255: INFO: Pod "downward-api-34fa5bf4-4a3d-4387-bd2f-3d302a0c1059" satisfied condition "Succeeded or Failed" +Oct 27 14:41:38.345: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downward-api-34fa5bf4-4a3d-4387-bd2f-3d302a0c1059 container dapi-container: +STEP: delete the pod +Oct 27 14:41:38.537: INFO: Waiting for pod downward-api-34fa5bf4-4a3d-4387-bd2f-3d302a0c1059 to disappear +Oct 27 14:41:38.627: INFO: Pod downward-api-34fa5bf4-4a3d-4387-bd2f-3d302a0c1059 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:38.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1777" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":157,"skipped":2529,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:38.900: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-715 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:41:40.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942500, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942500, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942500, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942500, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:41:44.174: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:58.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-715" for this suite. +STEP: Destroying namespace "webhook-715-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":158,"skipped":2540,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:58.816: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2759 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-ca3750b0-9b03-477b-82d1-3c2430115a8c in namespace container-probe-2759 +Oct 27 14:42:01.828: INFO: Started pod busybox-ca3750b0-9b03-477b-82d1-3c2430115a8c in namespace container-probe-2759 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:42:01.918: INFO: Initial restart count of pod busybox-ca3750b0-9b03-477b-82d1-3c2430115a8c is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:02.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2759" for this suite. 
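Both probe tests in this log hinge on the same pattern: an exec liveness probe running `cat /tmp/health`. If the container deletes the file, the probe fails and the kubelet restarts it; if the file stays in place, as here, restartCount remains 0. A minimal sketch with a hypothetical pod name and a stock busybox image:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo   # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# The counter the test watches:
kubectl get pod liveness-exec-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
```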
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":159,"skipped":2563,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:02.973: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-7598 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7598.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7598.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:46:06.416: INFO: DNS probes using dns-test-3f1a4f13-051b-4c5e-ac69-cb7eb35270c7 succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7598.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7598.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:46:14.205: INFO: File wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:14.300: INFO: File jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:14.300: INFO: Lookups using dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 failed for: [wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local] + +Oct 27 14:46:19.397: INFO: File wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 27 14:46:19.533: INFO: File jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:19.533: INFO: Lookups using dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 failed for: [wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local] + +Oct 27 14:46:24.395: INFO: File wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:24.488: INFO: File jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:24.488: INFO: Lookups using dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 failed for: [wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local] + +Oct 27 14:46:29.393: INFO: File wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:29.487: INFO: File jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:29.487: INFO: Lookups using dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 failed for: [wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local] + +Oct 27 14:46:34.394: INFO: File wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:46:34.487: INFO: File jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local from pod dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 27 14:46:34.487: INFO: Lookups using dns-7598/dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 failed for: [wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local] + +Oct 27 14:46:39.486: INFO: DNS probes using dns-test-164275d6-8912-4f34-b1f9-af98e452bec9 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7598.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7598.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7598.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7598.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:46:42.515: INFO: DNS probes using dns-test-31dfc6e5-01e4-465f-a4ec-809f64efff29 succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:42.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-7598" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":160,"skipped":2589,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:42.978: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1831 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +W1027 14:46:44.734821 5725 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Oct 27 14:46:44.734: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:44.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1831" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":161,"skipped":2597,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:44.917: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8657 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-0c288b29-584b-43be-b209-2e43c14981ff +STEP: Creating configMap with name cm-test-opt-upd-0481f719-adc0-4600-949d-3981e9834515 +STEP: Creating the pod +Oct 27 14:46:46.109: INFO: The status of Pod pod-configmaps-5f8ded34-508f-4983-b109-682178c4ab99 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:46:48.201: INFO: The status of Pod pod-configmaps-5f8ded34-508f-4983-b109-682178c4ab99 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:46:50.202: INFO: The status of Pod pod-configmaps-5f8ded34-508f-4983-b109-682178c4ab99 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-0c288b29-584b-43be-b209-2e43c14981ff +STEP: Updating configmap cm-test-opt-upd-0481f719-adc0-4600-949d-3981e9834515 +STEP: Creating configMap with name cm-test-opt-create-85db4e78-f6c3-46c2-b90d-400bdf00bc5f +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 
14:48:17.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8657" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":162,"skipped":2620,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:48:17.587: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-2717 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: Gathering metrics +Oct 27 14:48:18.959: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:48:18.959605 5725 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:48:18.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-2717" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":163,"skipped":2640,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:48:19.142: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1804 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 14:48:31.344: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:48:31.344501 5725 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 27 14:48:31.344: INFO: Deleting pod "simpletest-rc-to-be-deleted-2gms9" in namespace "gc-1804" +Oct 27 14:48:31.442: INFO: Deleting pod "simpletest-rc-to-be-deleted-4rxbg" in namespace "gc-1804" +Oct 27 14:48:31.536: INFO: Deleting pod "simpletest-rc-to-be-deleted-6vc4v" in namespace "gc-1804" +Oct 27 14:48:31.630: INFO: Deleting pod "simpletest-rc-to-be-deleted-8r6jf" in namespace "gc-1804" +Oct 27 14:48:31.724: INFO: Deleting pod "simpletest-rc-to-be-deleted-p4dth" in namespace "gc-1804" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:48:31.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1804" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":164,"skipped":2673,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:48:32.000: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-903 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:48:32.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-117cc4ea-959f-4a0e-bc7b-155c636bb133" in namespace "projected-903" to be "Succeeded or Failed" +Oct 27 14:48:32.918: INFO: Pod "downwardapi-volume-117cc4ea-959f-4a0e-bc7b-155c636bb133": Phase="Pending", Reason="", readiness=false. Elapsed: 90.00938ms +Oct 27 14:48:35.010: INFO: Pod "downwardapi-volume-117cc4ea-959f-4a0e-bc7b-155c636bb133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181354859s +STEP: Saw pod success +Oct 27 14:48:35.010: INFO: Pod "downwardapi-volume-117cc4ea-959f-4a0e-bc7b-155c636bb133" satisfied condition "Succeeded or Failed" +Oct 27 14:48:35.100: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-117cc4ea-959f-4a0e-bc7b-155c636bb133 container client-container: +STEP: delete the pod +Oct 27 14:48:35.292: INFO: Waiting for pod downwardapi-volume-117cc4ea-959f-4a0e-bc7b-155c636bb133 to disappear +Oct 27 14:48:35.382: INFO: Pod downwardapi-volume-117cc4ea-959f-4a0e-bc7b-155c636bb133 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:48:35.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-903" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":165,"skipped":2681,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:48:35.653: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3458 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-projected-q27w +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:48:36.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-q27w" in namespace "subpath-3458" to be "Succeeded or Failed" +Oct 27 14:48:36.756: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Pending", Reason="", readiness=false. Elapsed: 90.159933ms +Oct 27 14:48:38.847: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 2.180819492s +Oct 27 14:48:40.938: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 4.272591367s +Oct 27 14:48:43.030: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 6.363758962s +Oct 27 14:48:45.121: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 8.455288231s +Oct 27 14:48:47.213: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 10.546963947s +Oct 27 14:48:49.305: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 12.63895284s +Oct 27 14:48:51.396: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 14.730339551s +Oct 27 14:48:53.487: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 16.821336091s +Oct 27 14:48:55.579: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 18.912986445s +Oct 27 14:48:57.670: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Running", Reason="", readiness=true. Elapsed: 21.004616819s +Oct 27 14:48:59.763: INFO: Pod "pod-subpath-test-projected-q27w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 23.097380614s +STEP: Saw pod success +Oct 27 14:48:59.763: INFO: Pod "pod-subpath-test-projected-q27w" satisfied condition "Succeeded or Failed" +Oct 27 14:48:59.854: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-subpath-test-projected-q27w container test-container-subpath-projected-q27w: +STEP: delete the pod +Oct 27 14:49:00.047: INFO: Waiting for pod pod-subpath-test-projected-q27w to disappear +Oct 27 14:49:00.137: INFO: Pod pod-subpath-test-projected-q27w no longer exists +STEP: Deleting pod pod-subpath-test-projected-q27w +Oct 27 14:49:00.137: INFO: Deleting pod "pod-subpath-test-projected-q27w" in namespace "subpath-3458" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:00.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-3458" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":166,"skipped":2696,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:00.498: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1111 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-94329f8a-b3af-4b98-8ebb-5ee366bf9564 +STEP: Creating a pod to test consume secrets +Oct 27 14:49:01.418: INFO: Waiting up to 5m0s for pod "pod-secrets-5c23e98d-e172-43fb-bd7d-b1bdd1ce895a" in namespace "secrets-1111" to be "Succeeded or Failed" +Oct 27 14:49:01.509: INFO: Pod "pod-secrets-5c23e98d-e172-43fb-bd7d-b1bdd1ce895a": Phase="Pending", Reason="", readiness=false. Elapsed: 90.834131ms +Oct 27 14:49:03.601: INFO: Pod "pod-secrets-5c23e98d-e172-43fb-bd7d-b1bdd1ce895a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.182750461s +STEP: Saw pod success +Oct 27 14:49:03.601: INFO: Pod "pod-secrets-5c23e98d-e172-43fb-bd7d-b1bdd1ce895a" satisfied condition "Succeeded or Failed" +Oct 27 14:49:03.691: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-5c23e98d-e172-43fb-bd7d-b1bdd1ce895a container secret-volume-test: +STEP: delete the pod +Oct 27 14:49:03.923: INFO: Waiting for pod pod-secrets-5c23e98d-e172-43fb-bd7d-b1bdd1ce895a to disappear +Oct 27 14:49:04.013: INFO: Pod pod-secrets-5c23e98d-e172-43fb-bd7d-b1bdd1ce895a no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:04.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1111" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":167,"skipped":2705,"failed":0} +SSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:04.284: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1832 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-92488b36-3f15-42b8-a00f-555fa2545a3f +STEP: Creating a pod to test consume secrets +Oct 27 14:49:05.204: INFO: Waiting up to 5m0s for pod "pod-secrets-011f9f51-036a-4bb6-86c9-a6a75a785da6" in namespace "secrets-1832" to be "Succeeded or Failed" +Oct 27 14:49:05.294: INFO: Pod "pod-secrets-011f9f51-036a-4bb6-86c9-a6a75a785da6": Phase="Pending", Reason="", readiness=false. Elapsed: 90.20248ms +Oct 27 14:49:07.385: INFO: Pod "pod-secrets-011f9f51-036a-4bb6-86c9-a6a75a785da6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.18150946s +STEP: Saw pod success +Oct 27 14:49:07.385: INFO: Pod "pod-secrets-011f9f51-036a-4bb6-86c9-a6a75a785da6" satisfied condition "Succeeded or Failed" +Oct 27 14:49:07.476: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-011f9f51-036a-4bb6-86c9-a6a75a785da6 container secret-volume-test: +STEP: delete the pod +Oct 27 14:49:07.707: INFO: Waiting for pod pod-secrets-011f9f51-036a-4bb6-86c9-a6a75a785da6 to disappear +Oct 27 14:49:07.797: INFO: Pod pod-secrets-011f9f51-036a-4bb6-86c9-a6a75a785da6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:07.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1832" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":168,"skipped":2709,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:08.069: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-3226 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:49:08.802: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:14.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3226" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":169,"skipped":2718,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:14.559: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-25 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:49:15.294: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-25 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 
14:49:16.165: INFO: stderr: "" +Oct 27 14:49:16.165: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Oct 27 14:49:16.165: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-25 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' +Oct 27 14:49:17.117: INFO: stderr: "" +Oct 27 14:49:17.117: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:49:17.208: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-25 delete pods e2e-test-httpd-pod' +Oct 27 14:49:20.483: INFO: stderr: "" +Oct 27 14:49:20.483: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:20.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-25" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":170,"skipped":2763,"failed":0} +SSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:20.753: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8641 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in volume subpath +Oct 27 14:49:21.582: INFO: Waiting up to 5m0s for pod "var-expansion-4e847c9c-1fa3-4d5c-8e0d-ae4625719f2d" in namespace "var-expansion-8641" to be "Succeeded or Failed" +Oct 27 14:49:21.673: INFO: Pod "var-expansion-4e847c9c-1fa3-4d5c-8e0d-ae4625719f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 90.281808ms +Oct 27 14:49:23.764: INFO: Pod "var-expansion-4e847c9c-1fa3-4d5c-8e0d-ae4625719f2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.182001119s +STEP: Saw pod success +Oct 27 14:49:23.764: INFO: Pod "var-expansion-4e847c9c-1fa3-4d5c-8e0d-ae4625719f2d" satisfied condition "Succeeded or Failed" +Oct 27 14:49:23.854: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod var-expansion-4e847c9c-1fa3-4d5c-8e0d-ae4625719f2d container dapi-container: +STEP: delete the pod +Oct 27 14:49:24.043: INFO: Waiting for pod var-expansion-4e847c9c-1fa3-4d5c-8e0d-ae4625719f2d to disappear +Oct 27 14:49:24.133: INFO: Pod var-expansion-4e847c9c-1fa3-4d5c-8e0d-ae4625719f2d no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:24.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-8641" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":171,"skipped":2769,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:24.404: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-499 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 14:49:25.231: INFO: Waiting up to 5m0s for pod "pod-d24f51db-053e-4c1e-831d-d14997b2173d" in namespace "emptydir-499" to be "Succeeded or Failed" +Oct 27 14:49:25.321: INFO: Pod "pod-d24f51db-053e-4c1e-831d-d14997b2173d": Phase="Pending", Reason="", readiness=false. Elapsed: 90.36817ms +Oct 27 14:49:27.412: INFO: Pod "pod-d24f51db-053e-4c1e-831d-d14997b2173d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181778037s +STEP: Saw pod success +Oct 27 14:49:27.412: INFO: Pod "pod-d24f51db-053e-4c1e-831d-d14997b2173d" satisfied condition "Succeeded or Failed" +Oct 27 14:49:27.503: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-d24f51db-053e-4c1e-831d-d14997b2173d container test-container: +STEP: delete the pod +Oct 27 14:49:27.692: INFO: Waiting for pod pod-d24f51db-053e-4c1e-831d-d14997b2173d to disappear +Oct 27 14:49:27.782: INFO: Pod pod-d24f51db-053e-4c1e-831d-d14997b2173d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:27.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-499" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":172,"skipped":2770,"failed":0} + +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:28.053: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-953 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:49:29.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942969, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942969, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942969, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942969, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:49:33.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:49:33.193: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:36.819: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-953" for this suite. +STEP: Destroying namespace "webhook-953-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":173,"skipped":2770,"failed":0} +SSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:37.605: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4984 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:49:38.707: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Oct 27 14:49:38.888: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:38.888: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Oct 27 14:49:39.341: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:39.341: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:49:40.432: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:40.432: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:49:41.432: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:41.432: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Oct 27 14:49:41.884: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:41.884: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Oct 27 14:49:42.066: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:42.066: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:49:43.158: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:43.158: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:49:44.157: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:44.157: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:49:45.158: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:45.158: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:49:46.158: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:46.158: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 14:49:47.157: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:47.157: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4984, will wait for the garbage collector to delete the pods +Oct 27 14:49:47.620: INFO: Deleting DaemonSet.extensions daemon-set took: 91.724101ms +Oct 27 14:49:47.721: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.650699ms +Oct 27 14:49:49.412: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:49.412: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:49:49.502: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"24254"},"items":null} + +Oct 27 14:49:49.592: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"24254"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:50.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4984" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":174,"skipped":2773,"failed":0} +SSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:50.235: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2865 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:55:01.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2865" for this suite. + +• [SLOW TEST:311.640 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":175,"skipped":2776,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:55:01.875: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1956 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 14:55:02.704: INFO: Waiting up to 5m0s for pod "pod-373f6010-e661-439d-b039-c4eeb5d89cf0" in namespace "emptydir-1956" to be "Succeeded or Failed" +Oct 27 14:55:02.794: INFO: Pod "pod-373f6010-e661-439d-b039-c4eeb5d89cf0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.253443ms +Oct 27 14:55:04.885: INFO: Pod "pod-373f6010-e661-439d-b039-c4eeb5d89cf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181283453s +STEP: Saw pod success +Oct 27 14:55:04.885: INFO: Pod "pod-373f6010-e661-439d-b039-c4eeb5d89cf0" satisfied condition "Succeeded or Failed" +Oct 27 14:55:04.976: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-373f6010-e661-439d-b039-c4eeb5d89cf0 container test-container: +STEP: delete the pod +Oct 27 14:55:05.208: INFO: Waiting for pod pod-373f6010-e661-439d-b039-c4eeb5d89cf0 to disappear +Oct 27 14:55:05.299: INFO: Pod pod-373f6010-e661-439d-b039-c4eeb5d89cf0 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:55:05.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1956" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":176,"skipped":2812,"failed":0} +SS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:55:05.569: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-3431 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Oct 27 14:55:06.510: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:55:08.602: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:55:08.878: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:55:10.969: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 14:55:11.151: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 14:55:11.242: INFO: Pod pod-with-prestop-http-hook still exists +Oct 27 14:55:13.242: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 14:55:13.333: INFO: Pod pod-with-prestop-http-hook still exists +Oct 27 14:55:15.242: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 14:55:15.333: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:55:15.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-3431" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":177,"skipped":2814,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:55:15.777: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6116 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:55:16.508: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Oct 27 14:55:17.051: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:55:17.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6116" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":178,"skipped":2844,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:55:17.324: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-960 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating cluster-info +Oct 27 14:55:18.079: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-960 cluster-info' +Oct 27 14:55:18.411: INFO: stderr: "" +Oct 27 14:55:18.411: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:55:18.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-960" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":179,"skipped":2859,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:55:18.595: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6302 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service nodeport-test with type=NodePort in namespace services-6302 +STEP: creating replication controller nodeport-test in namespace services-6302 +I1027 14:55:19.518825 5725 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6302, replica count: 2 +Oct 27 14:55:22.619: INFO: Creating new exec pod +I1027 14:55:22.619766 5725 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:55:26.075: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6302 exec execpodtnlv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:55:27.101: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:55:27.102: INFO: stdout: "" +Oct 27 14:55:28.102: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6302 exec execpodtnlv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:55:29.174: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:55:29.174: INFO: stdout: "nodeport-test-vb2rq" +Oct 27 14:55:29.174: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6302 exec execpodtnlv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.175.170 80' +Oct 27 14:55:30.189: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.175.170 80\nConnection to 100.71.175.170 80 port [tcp/http] succeeded!\n" +Oct 27 14:55:30.189: INFO: stdout: "nodeport-test-vb2rq" +Oct 27 14:55:30.189: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6302 exec execpodtnlv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.28.25 32196' +Oct 27 14:55:31.183: INFO: stderr: "+ nc -v -t -w 2 10.250.28.25 32196\n+ echo hostName\nConnection to 10.250.28.25 32196 port [tcp/*] succeeded!\n" +Oct 27 14:55:31.183: INFO: stdout: "" +Oct 27 14:55:32.183: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6302 exec execpodtnlv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.28.25 32196' +Oct 27 14:55:33.246: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.28.25 32196\nConnection to 10.250.28.25 32196 port [tcp/*] succeeded!\n" +Oct 27 14:55:33.246: INFO: stdout: "nodeport-test-kdjng" +Oct 27 14:55:33.246: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6302 exec execpodtnlv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.9.48 32196' +Oct 27 14:55:34.276: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.9.48 32196\nConnection to 10.250.9.48 32196 port [tcp/*] succeeded!\n" +Oct 27 14:55:34.276: INFO: stdout: "nodeport-test-vb2rq" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:55:34.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6302" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":180,"skipped":2913,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:55:34.547: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2929 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:01.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2929" for this suite. 
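+The ReplaceConcurrent CronJob test above comes down to concurrencyPolicy: Replace. A minimal sketch of such a CronJob, assuming nothing beyond kubectl access; the name, schedule, and sleep duration are illustrative, and the busybox image is the one used throughout this run.
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: replace-demo
+spec:
+  schedule: "*/1 * * * *"
+  concurrencyPolicy: Replace   # a job still running at the next tick is killed and replaced
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: Never
+          containers:
+          - name: c
+            image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
+            command: ["sleep", "300"]
+EOF
+# Each job sleeps longer than the schedule interval, so every tick
+# deletes the running job and creates a fresh one:
+kubectl get jobs --watch
+```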
+•{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":181,"skipped":2920,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:02.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3244 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:57:03.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943423, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943423, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943423, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943423, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:57:07.134: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:08.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3244" for this suite. +STEP: Destroying namespace "webhook-3244-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":182,"skipped":2977,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:09.562: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-555 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Starting the proxy +Oct 27 14:57:10.294: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-555 proxy --unix-socket=/tmp/kubectl-proxy-unix597735584/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:10.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-555" for this suite. 
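+The proxy test above can be repeated by hand; the socket path is arbitrary. A minimal sketch:
+```bash
+kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
+sleep 1
+# The /api/ endpoint is served over the socket rather than a TCP port:
+curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
+kill $!
+```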
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":183,"skipped":2980,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:10.523: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-6021 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:57:11.254: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:57:11.442: INFO: The status of Pod pod-exec-websocket-4d9692f7-2939-4e17-8977-2142de80ffac is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:57:13.533: INFO: The status of Pod pod-exec-websocket-4d9692f7-2939-4e17-8977-2142de80ffac is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:13.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-6021" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":184,"skipped":3002,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:14.302: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-286 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-1868d0f1-450a-432c-8dd0-6bbfb2754d86 +STEP: Creating a pod to test consume configMaps +Oct 27 14:57:15.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31ab2fc6-9d36-41c1-9129-424a397d0dfc" in namespace "projected-286" to be "Succeeded or Failed" +Oct 27 14:57:15.337: INFO: Pod "pod-projected-configmaps-31ab2fc6-9d36-41c1-9129-424a397d0dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 90.213524ms +Oct 27 14:57:17.431: INFO: Pod "pod-projected-configmaps-31ab2fc6-9d36-41c1-9129-424a397d0dfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.184062674s +STEP: Saw pod success +Oct 27 14:57:17.431: INFO: Pod "pod-projected-configmaps-31ab2fc6-9d36-41c1-9129-424a397d0dfc" satisfied condition "Succeeded or Failed" +Oct 27 14:57:17.521: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-configmaps-31ab2fc6-9d36-41c1-9129-424a397d0dfc container agnhost-container: +STEP: delete the pod +Oct 27 14:57:17.713: INFO: Waiting for pod pod-projected-configmaps-31ab2fc6-9d36-41c1-9129-424a397d0dfc to disappear +Oct 27 14:57:17.804: INFO: Pod pod-projected-configmaps-31ab2fc6-9d36-41c1-9129-424a397d0dfc no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:17.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-286" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":185,"skipped":3009,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:18.075: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename certificates +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-5584 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:57:20.253: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:57:20.524: INFO: waiting for watch events with expected annotations +Oct 27 14:57:20.524: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:21.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-5584" for this suite. 
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":186,"skipped":3061,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:21.798: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename runtimeclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-4074 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Oct 27 14:57:23.162: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Oct 27 14:57:23.706: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:24.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-4074" for this suite. +•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":187,"skipped":3085,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:24.345: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4979 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +STEP: creating the pod +Oct 27 14:57:25.077: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 create -f -' +Oct 27 14:57:25.602: INFO: stderr: "" +Oct 27 14:57:25.602: INFO: stdout: "pod/pause created\n" +Oct 27 14:57:25.602: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Oct 27 14:57:25.602: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4979" to be 
"running and ready" +Oct 27 14:57:25.693: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 90.309383ms +Oct 27 14:57:27.786: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.183162106s +Oct 27 14:57:27.786: INFO: Pod "pause" satisfied condition "running and ready" +Oct 27 14:57:27.786: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: adding the label testing-label with value testing-label-value to a pod +Oct 27 14:57:27.786: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 label pods pause testing-label=testing-label-value' +Oct 27 14:57:28.213: INFO: stderr: "" +Oct 27 14:57:28.213: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Oct 27 14:57:28.213: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 get pod pause -L testing-label' +Oct 27 14:57:28.536: INFO: stderr: "" +Oct 27 14:57:28.536: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" +STEP: removing the label testing-label of a pod +Oct 27 14:57:28.536: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 label pods pause testing-label-' +Oct 27 14:57:28.952: INFO: stderr: "" +Oct 27 14:57:28.952: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Oct 27 14:57:28.952: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 get pod pause -L testing-label' +Oct 27 14:57:29.274: INFO: stderr: "" +Oct 27 14:57:29.275: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +STEP: using delete to clean up resources +Oct 27 14:57:29.275: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 delete --grace-period=0 --force -f -' +Oct 27 14:57:29.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:57:29.691: INFO: stdout: "pod \"pause\" force deleted\n" +Oct 27 14:57:29.691: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 get rc,svc -l name=pause --no-headers' +Oct 27 14:57:30.108: INFO: stderr: "No resources found in kubectl-4979 namespace.\n" +Oct 27 14:57:30.109: INFO: stdout: "" +Oct 27 14:57:30.109: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4979 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 14:57:30.439: INFO: stderr: "" +Oct 27 14:57:30.439: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:30.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4979" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":188,"skipped":3106,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:30.709: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4202 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:57:32.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943452, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943452, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943452, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943452, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:57:35.655: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Oct 27 14:57:36.182: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:36.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4202" for this suite. +STEP: Destroying namespace "webhook-4202-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":189,"skipped":3137,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:37.202: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-2421 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:57:37.942: INFO: PodSpec: initContainers in spec.initContainers +Oct 27 14:58:26.362: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d55a8a07-a7c1-434c-b381-f37ac2c0b4d6", GenerateName:"", Namespace:"init-container-2421", SelfLink:"", UID:"706fdffd-cf11-49aa-8994-df7fd24d56d8", ResourceVersion:"27188", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770943457, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"942050746"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"2ff9098bbd4c21c1cd5e3cbc35859abd0e1c4700cd8dda6e64978f0ba5725f90", "cni.projectcalico.org/podIP":"100.96.1.225/32", 
"cni.projectcalico.org/podIPs":"100.96.1.225/32", "kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ec6138), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ec6150), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ec6168), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ec6180), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ec6198), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ec61b0), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-fjmvf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002ac2040), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tm94z-0j6.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-fjmvf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", 
Value:"api.tm94z-0j6.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-fjmvf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tm94z-0j6.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-fjmvf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0008c6428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-10-250-28-25.ec2.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005ca070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0008c64a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0008c64c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0008c64c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0008c64cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002dea100), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943458, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943458, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943458, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943457, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.250.28.25", PodIP:"100.96.1.225", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.225"}}, StartTime:(*v1.Time)(0xc004ec61e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0005ca3f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0005ca5b0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://15b1231bf530d26a4284a6fb209f2109bb58121c0f1d4f99a712b7f7835c6238", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ac2120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ac2100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc0008c656f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:26.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
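+The pod dump above records the essential behaviour of this test: with restartPolicy Always, a failing first init container is retried with back-off (RestartCount:3 in the dump) and the app container never starts. A reduced version of the same pod, using the images and commands from the dump (the pod name is illustrative):
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: init-fail-demo
+spec:
+  restartPolicy: Always
+  initContainers:
+  - name: init1
+    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
+    command: ["/bin/false"]   # always fails, so init2 and run1 never start
+  - name: init2
+    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
+    command: ["/bin/true"]
+  containers:
+  - name: run1
+    image: k8s.gcr.io/pause:3.5
+EOF
+kubectl get pod init-fail-demo --watch   # stays Init:0/2 with a rising restart count
+```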
+STEP: Destroying namespace "init-container-2421" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":190,"skipped":3146,"failed":0} +SSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:26.632: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-662 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-662 +STEP: creating service affinity-clusterip-transition in namespace services-662 +STEP: creating replication controller affinity-clusterip-transition in namespace services-662 +I1027 14:58:27.572860 5725 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-662, replica count: 3 +I1027 14:58:30.674321 5725 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:58:30.854: INFO: Creating new exec pod +Oct 27 14:58:34.133: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-662 exec execpod-affinityd2dxq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Oct 27 14:58:35.136: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Oct 27 14:58:35.136: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:58:35.136: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-662 exec execpod-affinityd2dxq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.127.154 80' +Oct 27 14:58:36.200: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.127.154 80\nConnection to 100.70.127.154 80 port [tcp/http] succeeded!\n" +Oct 27 14:58:36.200: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:58:36.383: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-662 exec execpod-affinityd2dxq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.70.127.154:80/ ; done' +Oct 27 14:58:37.486: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n" +Oct 27 14:58:37.486: INFO: stdout: "\naffinity-clusterip-transition-gmdqk\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-gmdqk\naffinity-clusterip-transition-qs6n8\naffinity-clusterip-transition-gmdqk\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-gmdqk\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-gmdqk\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-qs6n8\naffinity-clusterip-transition-qs6n8\naffinity-clusterip-transition-qs6n8\naffinity-clusterip-transition-qs6n8\naffinity-clusterip-transition-qs6n8\naffinity-clusterip-transition-qs6n8" +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-gmdqk +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-gmdqk +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-qs6n8 +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-gmdqk +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-gmdqk +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-gmdqk +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-qs6n8 +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-qs6n8 +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-qs6n8 +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-qs6n8 +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-qs6n8 +Oct 27 14:58:37.486: INFO: Received response from host: affinity-clusterip-transition-qs6n8 +Oct 27 14:58:37.703: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-662 exec execpod-affinityd2dxq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.70.127.154:80/ ; done' +Oct 27 14:58:38.900: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.70.127.154:80/\n" +Oct 27 14:58:38.900: INFO: stdout: "\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf\naffinity-clusterip-transition-49twf" +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Received response from 
host: affinity-clusterip-transition-49twf +Oct 27 14:58:38.900: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-662, will wait for the garbage collector to delete the pods +Oct 27 14:58:39.293: INFO: Deleting ReplicationController affinity-clusterip-transition took: 90.918321ms +Oct 27 14:58:39.394: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.863964ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:41.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-662" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":191,"skipped":3149,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:42.074: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-2396 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:42.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-2396" for this suite. 
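+The EndpointSlice test above asserts that the API server publishes both a default/kubernetes Endpoints object and a mirrored EndpointSlice; this is directly observable:
+```bash
+kubectl get endpoints kubernetes -n default -o wide
+# EndpointSlices are linked to their Service by this well-known label:
+kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes
+```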
+•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":192,"skipped":3163,"failed":0} +SS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:43.172: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4217 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:43.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4217" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":193,"skipped":3165,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:44.177: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-4736 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:58:45.719: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:58:45.899: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:58:46.261: INFO: waiting for watch events with expected annotations +Oct 27 14:58:46.261: INFO: saw patched and updated annotations +STEP: deleting 
+STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:46.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-4736" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":194,"skipped":3226,"failed":0} + +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:46.899: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-8112 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:58:47.641: INFO: Creating ReplicaSet my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3 +Oct 27 14:58:47.822: INFO: Pod name my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3: Found 1 pods out of 1 +Oct 27 14:58:47.822: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3" is running +Oct 27 14:58:50.005: INFO: Pod "my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3-rq9t8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:58:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:58:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:58:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 14:58:47 +0000 UTC Reason: Message:}]) +Oct 27 14:58:50.005: INFO: Trying to dial the pod +Oct 27 14:58:55.329: INFO: Controller my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3: Got expected result from replica 1 [my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3-rq9t8]: "my-hostname-basic-c4e08ea1-c6f5-482f-be35-81ce99c20bb3-rq9t8", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:55.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-8112" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":195,"skipped":3226,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:55.600: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-9905 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +Oct 27 14:58:56.424: INFO: created test-event-1 +Oct 27 14:58:56.514: INFO: created test-event-2 +Oct 27 14:58:56.605: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Oct 27 14:58:56.696: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Oct 27 14:58:56.792: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:56.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-9905" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":196,"skipped":3239,"failed":0} +SS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:57.065: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-9993 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override arguments +Oct 27 14:58:57.892: INFO: Waiting up to 5m0s for pod "client-containers-8d2a14ef-8ad1-43a7-96c5-cc0bdfc16737" in namespace "containers-9993" to be "Succeeded or Failed" +Oct 27 14:58:57.982: INFO: Pod "client-containers-8d2a14ef-8ad1-43a7-96c5-cc0bdfc16737": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.435938ms +Oct 27 14:59:00.074: INFO: Pod "client-containers-8d2a14ef-8ad1-43a7-96c5-cc0bdfc16737": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181627943s +STEP: Saw pod success +Oct 27 14:59:00.074: INFO: Pod "client-containers-8d2a14ef-8ad1-43a7-96c5-cc0bdfc16737" satisfied condition "Succeeded or Failed" +Oct 27 14:59:00.164: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod client-containers-8d2a14ef-8ad1-43a7-96c5-cc0bdfc16737 container agnhost-container: +STEP: delete the pod +Oct 27 14:59:00.396: INFO: Waiting for pod client-containers-8d2a14ef-8ad1-43a7-96c5-cc0bdfc16737 to disappear +Oct 27 14:59:00.486: INFO: Pod client-containers-8d2a14ef-8ad1-43a7-96c5-cc0bdfc16737 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:00.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-9993" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":197,"skipped":3241,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:00.757: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3842 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 14:59:01.488: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 create -f -' +Oct 27 14:59:02.006: INFO: stderr: "" +Oct 27 14:59:02.006: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Oct 27 14:59:02.006: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 14:59:02.424: INFO: stderr: "" +Oct 27 14:59:02.424: INFO: stdout: "update-demo-nautilus-9l7h4 update-demo-nautilus-f55bj " +Oct 27 14:59:02.424: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods update-demo-nautilus-9l7h4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:59:02.747: INFO: stderr: "" +Oct 27 14:59:02.747: INFO: stdout: "" +Oct 27 14:59:02.747: INFO: update-demo-nautilus-9l7h4 is created but not running +Oct 27 14:59:07.748: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 14:59:08.080: INFO: stderr: "" +Oct 27 14:59:08.080: INFO: stdout: "update-demo-nautilus-9l7h4 update-demo-nautilus-f55bj " +Oct 27 14:59:08.080: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods update-demo-nautilus-9l7h4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:59:08.406: INFO: stderr: "" +Oct 27 14:59:08.406: INFO: stdout: "" +Oct 27 14:59:08.406: INFO: update-demo-nautilus-9l7h4 is created but not running +Oct 27 14:59:13.407: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 14:59:13.822: INFO: stderr: "" +Oct 27 14:59:13.822: INFO: stdout: "update-demo-nautilus-9l7h4 update-demo-nautilus-f55bj " +Oct 27 14:59:13.822: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods update-demo-nautilus-9l7h4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:59:14.153: INFO: stderr: "" +Oct 27 14:59:14.153: INFO: stdout: "true" +Oct 27 14:59:14.153: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods update-demo-nautilus-9l7h4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 14:59:14.486: INFO: stderr: "" +Oct 27 14:59:14.486: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 14:59:14.486: INFO: validating pod update-demo-nautilus-9l7h4 +Oct 27 14:59:14.673: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 14:59:14.673: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 14:59:14.673: INFO: update-demo-nautilus-9l7h4 is verified up and running +Oct 27 14:59:14.673: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods update-demo-nautilus-f55bj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:59:15.029: INFO: stderr: "" +Oct 27 14:59:15.029: INFO: stdout: "true" +Oct 27 14:59:15.029: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods update-demo-nautilus-f55bj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 14:59:15.357: INFO: stderr: "" +Oct 27 14:59:15.358: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 14:59:15.358: INFO: validating pod update-demo-nautilus-f55bj +Oct 27 14:59:15.541: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 14:59:15.541: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 14:59:15.541: INFO: update-demo-nautilus-f55bj is verified up and running +STEP: using delete to clean up resources +Oct 27 14:59:15.541: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 delete --grace-period=0 --force -f -' +Oct 27 14:59:15.956: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:59:15.956: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 14:59:15.956: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get rc,svc -l name=update-demo --no-headers' +Oct 27 14:59:16.898: INFO: stderr: "No resources found in kubectl-3842 namespace.\n" +Oct 27 14:59:16.898: INFO: stdout: "" +Oct 27 14:59:16.898: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3842 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 14:59:17.224: INFO: stderr: "" +Oct 27 14:59:17.224: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:17.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3842" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":198,"skipped":3263,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:17.494: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6229 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:59:18.325: INFO: Waiting up to 5m0s for pod "downward-api-1f88defc-d0ac-4270-ae1a-6df501d00161" in namespace "downward-api-6229" to be "Succeeded or Failed" +Oct 27 14:59:18.415: INFO: Pod "downward-api-1f88defc-d0ac-4270-ae1a-6df501d00161": Phase="Pending", Reason="", readiness=false. Elapsed: 90.110595ms +Oct 27 14:59:20.506: INFO: Pod "downward-api-1f88defc-d0ac-4270-ae1a-6df501d00161": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181472287s +STEP: Saw pod success +Oct 27 14:59:20.506: INFO: Pod "downward-api-1f88defc-d0ac-4270-ae1a-6df501d00161" satisfied condition "Succeeded or Failed" +Oct 27 14:59:20.597: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downward-api-1f88defc-d0ac-4270-ae1a-6df501d00161 container dapi-container: +STEP: delete the pod +Oct 27 14:59:20.787: INFO: Waiting for pod downward-api-1f88defc-d0ac-4270-ae1a-6df501d00161 to disappear +Oct 27 14:59:20.878: INFO: Pod downward-api-1f88defc-d0ac-4270-ae1a-6df501d00161 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:20.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6229" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":199,"skipped":3304,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:21.148: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-757 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-t4r8 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:59:22.158: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t4r8" in namespace "subpath-757" to be "Succeeded or Failed" +Oct 27 14:59:22.248: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Pending", Reason="", readiness=false. Elapsed: 90.10911ms +Oct 27 14:59:24.340: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 2.18186399s +Oct 27 14:59:26.431: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 4.273656304s +Oct 27 14:59:28.523: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 6.365248748s +Oct 27 14:59:30.614: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 8.456382949s +Oct 27 14:59:32.706: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 10.547894686s +Oct 27 14:59:34.798: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 12.639950003s +Oct 27 14:59:36.890: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.732658229s +Oct 27 14:59:38.981: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 16.823323304s +Oct 27 14:59:41.073: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 18.915218982s +Oct 27 14:59:43.164: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Running", Reason="", readiness=true. Elapsed: 21.006294699s +Oct 27 14:59:45.255: INFO: Pod "pod-subpath-test-configmap-t4r8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.097551929s +STEP: Saw pod success +Oct 27 14:59:45.255: INFO: Pod "pod-subpath-test-configmap-t4r8" satisfied condition "Succeeded or Failed" +Oct 27 14:59:45.346: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-subpath-test-configmap-t4r8 container test-container-subpath-configmap-t4r8: +STEP: delete the pod +Oct 27 14:59:45.538: INFO: Waiting for pod pod-subpath-test-configmap-t4r8 to disappear +Oct 27 14:59:45.628: INFO: Pod pod-subpath-test-configmap-t4r8 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-t4r8 +Oct 27 14:59:45.628: INFO: Deleting pod "pod-subpath-test-configmap-t4r8" in namespace "subpath-757" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:45.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-757" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":200,"skipped":3318,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:45.988: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-3448 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test env composition +Oct 27 14:59:46.817: INFO: Waiting up to 5m0s for pod "var-expansion-200a4ee0-0850-43e4-a9b7-d716d84ae282" in namespace "var-expansion-3448" to be "Succeeded or Failed" +Oct 27 14:59:46.910: INFO: Pod "var-expansion-200a4ee0-0850-43e4-a9b7-d716d84ae282": Phase="Pending", Reason="", readiness=false. Elapsed: 92.76839ms +Oct 27 14:59:49.001: INFO: Pod "var-expansion-200a4ee0-0850-43e4-a9b7-d716d84ae282": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.184001206s +STEP: Saw pod success +Oct 27 14:59:49.001: INFO: Pod "var-expansion-200a4ee0-0850-43e4-a9b7-d716d84ae282" satisfied condition "Succeeded or Failed" +Oct 27 14:59:49.091: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod var-expansion-200a4ee0-0850-43e4-a9b7-d716d84ae282 container dapi-container: +STEP: delete the pod +Oct 27 14:59:49.280: INFO: Waiting for pod var-expansion-200a4ee0-0850-43e4-a9b7-d716d84ae282 to disappear +Oct 27 14:59:49.371: INFO: Pod var-expansion-200a4ee0-0850-43e4-a9b7-d716d84ae282 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:49.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3448" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":201,"skipped":3334,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:49.641: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9074 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:50.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9074" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":202,"skipped":3344,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:51.104: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename aggregator +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-7323 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Oct 27 14:59:51.836: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the sample API server. +Oct 27 14:59:53.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:59:55.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:59:57.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:59:59.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:00:01.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943592, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:00:05.764: INFO: Waited 2.339151368s for the sample-apiserver to be ready to handle requests. 
+STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Oct 27 15:00:07.149: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:09.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-7323" for this suite. +•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":203,"skipped":3364,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:09.404: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-2850 +STEP: Waiting for a default service account to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:00:10.324: INFO: created pod +Oct 27 15:00:10.324: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2850" to be "Succeeded or Failed" +Oct 27 15:00:10.414: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 90.217059ms +Oct 27 15:00:12.506: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181471148s +STEP: Saw pod success +Oct 27 15:00:12.506: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Oct 27 15:00:42.506: INFO: polling logs +Oct 27 15:00:42.715: INFO: Pod logs: +2021/10/27 15:00:11 OK: Got token +2021/10/27 15:00:11 validating with in-cluster discovery +2021/10/27 15:00:11 OK: got issuer https://api.tm94z-0j6.it.internal.staging.k8s.ondemand.com +2021/10/27 15:00:11 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tm94z-0j6.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-2850:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635347410, NotBefore:1635346810, IssuedAt:1635346810, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2850", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"71a4f9f5-fae5-467a-89c6-765bc4c9774c"}}} +2021/10/27 15:00:11 OK: Constructed OIDC provider for issuer https://api.tm94z-0j6.it.internal.staging.k8s.ondemand.com +2021/10/27 15:00:11 OK: Validated signature on JWT +2021/10/27 15:00:11 OK: Got valid claims from token! 
+2021/10/27 15:00:11 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tm94z-0j6.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-2850:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635347410, NotBefore:1635346810, IssuedAt:1635346810, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2850", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"71a4f9f5-fae5-467a-89c6-765bc4c9774c"}}} + +Oct 27 15:00:42.715: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:42.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2850" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":204,"skipped":3378,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:43.081: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8072 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-fb5e053b-de36-47c3-bdc2-48b1afd8f30c +STEP: Creating secret with name s-test-opt-upd-a77021e0-8202-45b0-b75c-479f93be2302 +STEP: Creating the pod +Oct 27 15:00:44.281: INFO: The status of Pod pod-secrets-af32ca7c-3c8f-4794-a88d-90e796f8756c is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:00:46.373: INFO: The status of Pod pod-secrets-af32ca7c-3c8f-4794-a88d-90e796f8756c is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-fb5e053b-de36-47c3-bdc2-48b1afd8f30c +STEP: Updating secret s-test-opt-upd-a77021e0-8202-45b0-b75c-479f93be2302 +STEP: Creating secret with name s-test-opt-create-955ff170-22a6-4518-b57e-1b7148ff122b +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:01:58.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8072" for this suite. +•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":205,"skipped":3381,"failed":0} +SSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:01:59.032: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6562 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:17.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6562" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":206,"skipped":3384,"failed":0} +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:17.768: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2269 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:02:18.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3284f801-fd2a-4a85-b4d6-2ccfaf312017" in namespace "projected-2269" to be "Succeeded or Failed" +Oct 27 15:02:18.703: INFO: Pod "downwardapi-volume-3284f801-fd2a-4a85-b4d6-2ccfaf312017": Phase="Pending", Reason="", readiness=false. Elapsed: 91.081256ms +Oct 27 15:02:20.795: INFO: Pod "downwardapi-volume-3284f801-fd2a-4a85-b4d6-2ccfaf312017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.182609662s +STEP: Saw pod success +Oct 27 15:02:20.795: INFO: Pod "downwardapi-volume-3284f801-fd2a-4a85-b4d6-2ccfaf312017" satisfied condition "Succeeded or Failed" +Oct 27 15:02:20.886: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-3284f801-fd2a-4a85-b4d6-2ccfaf312017 container client-container: +STEP: delete the pod +Oct 27 15:02:21.077: INFO: Waiting for pod downwardapi-volume-3284f801-fd2a-4a85-b4d6-2ccfaf312017 to disappear +Oct 27 15:02:21.167: INFO: Pod downwardapi-volume-3284f801-fd2a-4a85-b4d6-2ccfaf312017 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:21.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2269" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":207,"skipped":3387,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:21.439: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-858 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:02:22.268: INFO: Waiting up to 5m0s for pod "downward-api-3cbcd199-9bf1-4ff8-9518-fca8965137f5" in namespace "downward-api-858" to be "Succeeded or Failed" +Oct 27 15:02:22.358: INFO: Pod "downward-api-3cbcd199-9bf1-4ff8-9518-fca8965137f5": Phase="Pending", Reason="", readiness=false. Elapsed: 90.255094ms +Oct 27 15:02:24.449: INFO: Pod "downward-api-3cbcd199-9bf1-4ff8-9518-fca8965137f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181022538s +STEP: Saw pod success +Oct 27 15:02:24.449: INFO: Pod "downward-api-3cbcd199-9bf1-4ff8-9518-fca8965137f5" satisfied condition "Succeeded or Failed" +Oct 27 15:02:24.539: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downward-api-3cbcd199-9bf1-4ff8-9518-fca8965137f5 container dapi-container: +STEP: delete the pod +Oct 27 15:02:24.732: INFO: Waiting for pod downward-api-3cbcd199-9bf1-4ff8-9518-fca8965137f5 to disappear +Oct 27 15:02:24.822: INFO: Pod downward-api-3cbcd199-9bf1-4ff8-9518-fca8965137f5 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:24.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-858" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":208,"skipped":3437,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:25.092: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5331 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-d4db2dd1-b9e8-465b-9739-e16ae23ca61d +STEP: Creating a pod to test consume secrets +Oct 27 15:02:26.014: INFO: Waiting up to 5m0s for pod "pod-secrets-276976db-494f-4ca0-8efd-9511d09bec83" in namespace "secrets-5331" to be "Succeeded or Failed" +Oct 27 15:02:26.104: INFO: Pod "pod-secrets-276976db-494f-4ca0-8efd-9511d09bec83": Phase="Pending", Reason="", readiness=false. Elapsed: 90.471505ms +Oct 27 15:02:28.195: INFO: Pod "pod-secrets-276976db-494f-4ca0-8efd-9511d09bec83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181643089s +STEP: Saw pod success +Oct 27 15:02:28.195: INFO: Pod "pod-secrets-276976db-494f-4ca0-8efd-9511d09bec83" satisfied condition "Succeeded or Failed" +Oct 27 15:02:28.285: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-276976db-494f-4ca0-8efd-9511d09bec83 container secret-volume-test: +STEP: delete the pod +Oct 27 15:02:28.477: INFO: Waiting for pod pod-secrets-276976db-494f-4ca0-8efd-9511d09bec83 to disappear +Oct 27 15:02:28.567: INFO: Pod pod-secrets-276976db-494f-4ca0-8efd-9511d09bec83 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:28.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5331" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":209,"skipped":3464,"failed":0} +SS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:28.837: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-5028 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +Oct 27 15:02:30.527: INFO: created pod pod-service-account-defaultsa +Oct 27 15:02:30.527: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Oct 27 15:02:30.621: INFO: created pod pod-service-account-mountsa +Oct 27 15:02:30.621: INFO: pod pod-service-account-mountsa service account token volume mount: true +Oct 27 15:02:30.716: INFO: created pod pod-service-account-nomountsa +Oct 27 15:02:30.716: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Oct 27 15:02:30.813: INFO: created pod pod-service-account-defaultsa-mountspec +Oct 27 15:02:30.813: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Oct 27 15:02:30.907: INFO: created pod pod-service-account-mountsa-mountspec +Oct 27 15:02:30.907: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Oct 27 15:02:31.001: INFO: created pod pod-service-account-nomountsa-mountspec +Oct 27 15:02:31.001: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Oct 27 15:02:31.095: INFO: created pod pod-service-account-defaultsa-nomountspec +Oct 27 15:02:31.095: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Oct 27 15:02:31.189: INFO: created pod pod-service-account-mountsa-nomountspec +Oct 27 15:02:31.189: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Oct 27 15:02:31.282: INFO: created pod pod-service-account-nomountsa-nomountspec +Oct 27 15:02:31.282: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:31.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-5028" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":210,"skipped":3466,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:31.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9075 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-3e6fdb72-737a-47d0-bb07-5e576dd22eb9 +STEP: Creating a pod to test consume secrets +Oct 27 15:02:32.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-84a81497-500e-4c83-a9b3-ced75cfafbdf" in namespace "projected-9075" to be "Succeeded or Failed" +Oct 27 15:02:32.561: INFO: Pod "pod-projected-secrets-84a81497-500e-4c83-a9b3-ced75cfafbdf": Phase="Pending", Reason="", readiness=false. Elapsed: 90.338813ms +Oct 27 15:02:34.652: INFO: Pod "pod-projected-secrets-84a81497-500e-4c83-a9b3-ced75cfafbdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181497119s +STEP: Saw pod success +Oct 27 15:02:34.653: INFO: Pod "pod-projected-secrets-84a81497-500e-4c83-a9b3-ced75cfafbdf" satisfied condition "Succeeded or Failed" +Oct 27 15:02:34.743: INFO: Trying to get logs from node ip-10-250-9-48.ec2.internal pod pod-projected-secrets-84a81497-500e-4c83-a9b3-ced75cfafbdf container projected-secret-volume-test: +STEP: delete the pod +Oct 27 15:02:34.975: INFO: Waiting for pod pod-projected-secrets-84a81497-500e-4c83-a9b3-ced75cfafbdf to disappear +Oct 27 15:02:35.066: INFO: Pod pod-projected-secrets-84a81497-500e-4c83-a9b3-ced75cfafbdf no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:35.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9075" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":211,"skipped":3471,"failed":0} +SSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:35.336: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-5931 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:02:36.069: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:02:36.256: INFO: Waiting for terminating namespaces to be deleted... +Oct 27 15:02:36.346: INFO: +Logging pods the apiserver thinks is on node ip-10-250-28-25.ec2.internal before test +Oct 27 15:02:36.528: INFO: addons-nginx-ingress-controller-b7784495c-9bd2v from kube-system started at 2021-10-27 13:56:28 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: apiserver-proxy-kb6fx from kube-system started at 2021-10-27 13:53:35 +0000 UTC (2 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: blackbox-exporter-65c549b94c-kw2mt from kube-system started at 2021-10-27 14:00:28 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: calico-node-pqn8p from kube-system started at 2021-10-27 13:55:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: csi-driver-node-ddm2w from kube-system started at 2021-10-27 13:53:35 +0000 UTC (3 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: kube-proxy-tnk6p from kube-system started at 2021-10-27 13:56:34 +0000 UTC (2 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: node-exporter-jhkvj from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: node-problem-detector-lscmn from kube-system started at 2021-10-27 14:20:29 +0000 UTC (1 container statuses recorded) +Oct 27 
15:02:36.529: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-defaultsa from svcaccounts-5028 started at 2021-10-27 15:02:30 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-defaultsa-mountspec from svcaccounts-5028 started at 2021-10-27 15:02:30 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-defaultsa-nomountspec from svcaccounts-5028 started at 2021-10-27 15:02:31 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-mountsa from svcaccounts-5028 started at 2021-10-27 15:02:30 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-mountsa-mountspec from svcaccounts-5028 started at 2021-10-27 15:02:30 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-nomountsa from svcaccounts-5028 started at 2021-10-27 15:02:30 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-5028 started at 2021-10-27 15:02:30 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: pod-service-account-nomountsa-nomountspec from svcaccounts-5028 started at 2021-10-27 15:02:31 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.529: INFO: Container token-test ready: true, restart count 0 +Oct 27 15:02:36.529: INFO: +Logging pods the apiserver thinks is on node ip-10-250-9-48.ec2.internal before test +Oct 27 15:02:36.623: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-bnwpb from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: apiserver-proxy-4k9m7 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (2 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: calico-kube-controllers-56bcbfb5c5-nhtm5 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: calico-node-pcdrk from kube-system started at 2021-10-27 13:55:32 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: calico-node-vertical-autoscaler-785b5f968-89m6j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: calico-typha-deploy-546b97d4b5-xrvqz from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 
15:02:36.623: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-gbzpp from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: calico-typha-vertical-autoscaler-5c9655cddd-wwsqk from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: coredns-746d4d76f8-nqpnh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: coredns-746d4d76f8-zksdl from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: csi-driver-node-cwstr from kube-system started at 2021-10-27 13:53:22 +0000 UTC (3 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: kube-proxy-d8j27 from kube-system started at 2021-10-27 13:56:29 +0000 UTC (2 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: metrics-server-98f7b76bf-s6v4j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: node-exporter-27q2j from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: node-problem-detector-66fvb from kube-system started at 2021-10-27 14:20:29 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: vpn-shoot-77846799c6-lvhrh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: dashboard-metrics-scraper-7ccbfc448f-8vkgz from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: kubernetes-dashboard-5484586d8f-2hskr from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Oct 27 15:02:36.623: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-5028 started at 2021-10-27 15:02:31 +0000 UTC (1 container statuses recorded) +Oct 27 15:02:36.623: INFO: Container token-test ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
+STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.16b1eb6495b2f6e0], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:38.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5931" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":212,"skipped":3477,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:38.359: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6550 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:39.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6550" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":213,"skipped":3485,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:40.165: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-711 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Oct 27 15:02:41.169: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:43.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-711" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":214,"skipped":3507,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:44.075: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6263 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:02:45.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943765, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943765, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943765, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:02:49.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:50.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6263" for this suite. +STEP: Destroying namespace "webhook-6263-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":215,"skipped":3550,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:51.058: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-416 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-416 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-416 +Oct 27 15:02:52.064: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 15:03:02.158: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:03:02.796: INFO: Deleting all statefulset in ns statefulset-416 +Oct 27 15:03:02.886: INFO: Scaling statefulset ss to 0 +Oct 27 15:03:13.249: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:03:13.339: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:03:13.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-416" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":216,"skipped":3566,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:03:13.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1884 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:03:14.804: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:03:16.895: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:18.895: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:20.896: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:22.896: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:24.896: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:26.897: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:28.895: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:30.895: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:32.896: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = false) +Oct 27 15:03:34.896: INFO: The status of Pod test-webserver-839b4a8d-66dc-4905-99c7-7f351d68965f is Running (Ready = true) +Oct 27 15:03:34.987: INFO: Container started at 2021-10-27 15:03:15 +0000 UTC, pod became ready at 2021-10-27 15:03:34 +0000 UTC +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:03:34.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1884" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":217,"skipped":3589,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:03:35.258: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3441 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:03:35.991: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 15:03:40.401: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3441 --namespace=crd-publish-openapi-3441 create -f -' +Oct 27 15:03:41.802: INFO: stderr: "" +Oct 27 15:03:41.802: INFO: stdout: "e2e-test-crd-publish-openapi-4408-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 15:03:41.802: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3441 --namespace=crd-publish-openapi-3441 delete e2e-test-crd-publish-openapi-4408-crds test-cr' +Oct 27 15:03:42.226: INFO: stderr: "" +Oct 27 15:03:42.226: INFO: stdout: "e2e-test-crd-publish-openapi-4408-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Oct 27 15:03:42.226: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3441 --namespace=crd-publish-openapi-3441 apply -f -' +Oct 27 15:03:42.929: INFO: stderr: "" +Oct 27 15:03:42.929: INFO: stdout: "e2e-test-crd-publish-openapi-4408-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 15:03:42.929: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3441 --namespace=crd-publish-openapi-3441 delete e2e-test-crd-publish-openapi-4408-crds test-cr' +Oct 27 15:03:43.344: INFO: stderr: "" +Oct 27 15:03:43.344: INFO: stdout: "e2e-test-crd-publish-openapi-4408-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Oct 27 15:03:43.344: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3441 explain e2e-test-crd-publish-openapi-4408-crds' +Oct 27 15:03:43.761: INFO: stderr: "" +Oct 27 15:03:43.761: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4408-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:03:48.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3441" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":218,"skipped":3591,"failed":0} +SSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:03:48.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename hostport +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostport-4742 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Oct 27 15:03:49.880: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:03:51.972: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.250.28.25 on the node which pod1 resides and expect scheduled +Oct 27 15:03:52.201: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:03:54.293: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.250.28.25 but use UDP protocol on the node which pod2 resides +Oct 27 15:03:54.479: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:03:56.571: INFO: The status of Pod pod3 is Running (Ready = false) +Oct 27 15:03:58.571: INFO: The status of Pod pod3 is Running (Ready = true) +Oct 27 15:03:58.756: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:04:00.847: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Oct 27 
15:04:00.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.250.28.25 http://127.0.0.1:54323/hostname] Namespace:hostport-4742 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:04:00.938: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.28.25, port: 54323 +Oct 27 15:04:01.665: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.250.28.25:54323/hostname] Namespace:hostport-4742 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:04:01.665: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.28.25, port: 54323 UDP +Oct 27 15:04:02.375: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.250.28.25 54323] Namespace:hostport-4742 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:04:02.375: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:08.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-4742" for this suite. +•{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":219,"skipped":3595,"failed":0} +S +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:08.398: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-4413 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 15:04:09.686: INFO: running pods: 0 < 3 +Oct 27 15:04:11.778: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:13.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-4413" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":220,"skipped":3596,"failed":0} +SSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:14.139: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3938 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 15:04:14.870: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3938 create -f -' +Oct 27 15:04:15.743: INFO: stderr: "" +Oct 27 15:04:15.743: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 15:04:16.835: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:04:16.835: INFO: Found 0 / 1 +Oct 27 15:04:17.835: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:04:17.835: INFO: Found 1 / 1 +Oct 27 15:04:17.835: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Oct 27 15:04:17.925: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:04:17.925: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 15:04:17.925: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3938 patch pod agnhost-primary-64k2k -p {"metadata":{"annotations":{"x":"y"}}}' +Oct 27 15:04:18.352: INFO: stderr: "" +Oct 27 15:04:18.352: INFO: stdout: "pod/agnhost-primary-64k2k patched\n" +STEP: checking annotations +Oct 27 15:04:18.442: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:04:18.442: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:18.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3938" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":221,"skipped":3602,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:18.715: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-6810 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Oct 27 15:04:41.087: INFO: EndpointSlice for Service endpointslice-6810/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:51.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-6810" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":222,"skipped":3642,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:51.541: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1670 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pods +Oct 27 15:04:52.370: INFO: created test-pod-1 +Oct 27 15:04:52.465: INFO: created test-pod-2 +Oct 27 15:04:52.559: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Oct 27 15:04:52.837: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:04:53.930: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:04:54.929: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:04:55.929: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:56.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1670" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":223,"skipped":3666,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:57.207: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6869 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's args +Oct 27 15:04:58.036: INFO: Waiting up to 5m0s for pod "var-expansion-f904b9c8-3aa6-4056-a589-65d718692a4a" in namespace "var-expansion-6869" to be "Succeeded or Failed" +Oct 27 15:04:58.127: INFO: Pod "var-expansion-f904b9c8-3aa6-4056-a589-65d718692a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 90.284362ms +Oct 27 15:05:00.219: INFO: Pod "var-expansion-f904b9c8-3aa6-4056-a589-65d718692a4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182206496s +STEP: Saw pod success +Oct 27 15:05:00.219: INFO: Pod "var-expansion-f904b9c8-3aa6-4056-a589-65d718692a4a" satisfied condition "Succeeded or Failed" +Oct 27 15:05:00.309: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod var-expansion-f904b9c8-3aa6-4056-a589-65d718692a4a container dapi-container: +STEP: delete the pod +Oct 27 15:05:00.538: INFO: Waiting for pod var-expansion-f904b9c8-3aa6-4056-a589-65d718692a4a to disappear +Oct 27 15:05:00.628: INFO: Pod var-expansion-f904b9c8-3aa6-4056-a589-65d718692a4a no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:00.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6869" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":224,"skipped":3677,"failed":0} + +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:00.900: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-291 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:05:01.632: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:05:01.813: INFO: Waiting for terminating namespaces to be deleted... +Oct 27 15:05:01.903: INFO: +Logging pods the apiserver thinks is on node ip-10-250-28-25.ec2.internal before test +Oct 27 15:05:02.085: INFO: addons-nginx-ingress-controller-b7784495c-9bd2v from kube-system started at 2021-10-27 13:56:28 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.085: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: apiserver-proxy-kb6fx from kube-system started at 2021-10-27 13:53:35 +0000 UTC (2 container statuses recorded) +Oct 27 15:05:02.085: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: blackbox-exporter-65c549b94c-kw2mt from kube-system started at 2021-10-27 14:00:28 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.085: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: calico-node-pqn8p from kube-system started at 2021-10-27 13:55:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.085: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: csi-driver-node-ddm2w from kube-system started at 2021-10-27 13:53:35 +0000 UTC (3 container statuses recorded) +Oct 27 15:05:02.085: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: kube-proxy-tnk6p from kube-system started at 2021-10-27 13:56:34 +0000 UTC (2 container statuses recorded) +Oct 27 15:05:02.085: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: node-exporter-jhkvj from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.085: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: node-problem-detector-lscmn from kube-system started at 2021-10-27 14:20:29 +0000 UTC (1 container statuses recorded) 
+Oct 27 15:05:02.085: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:05:02.085: INFO: +Logging pods the apiserver thinks is on node ip-10-250-9-48.ec2.internal before test +Oct 27 15:05:02.179: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-bnwpb from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: apiserver-proxy-4k9m7 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (2 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: calico-kube-controllers-56bcbfb5c5-nhtm5 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: calico-node-pcdrk from kube-system started at 2021-10-27 13:55:32 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: calico-node-vertical-autoscaler-785b5f968-89m6j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: calico-typha-deploy-546b97d4b5-xrvqz from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-gbzpp from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: calico-typha-vertical-autoscaler-5c9655cddd-wwsqk from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: coredns-746d4d76f8-nqpnh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: coredns-746d4d76f8-zksdl from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: csi-driver-node-cwstr from kube-system started at 2021-10-27 13:53:22 +0000 UTC (3 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: kube-proxy-d8j27 from kube-system started at 2021-10-27 13:56:29 +0000 UTC (2 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: metrics-server-98f7b76bf-s6v4j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container metrics-server ready: 
true, restart count 0 +Oct 27 15:05:02.180: INFO: node-exporter-27q2j from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: node-problem-detector-66fvb from kube-system started at 2021-10-27 14:20:29 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: vpn-shoot-77846799c6-lvhrh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: dashboard-metrics-scraper-7ccbfc448f-8vkgz from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:05:02.180: INFO: kubernetes-dashboard-5484586d8f-2hskr from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:05:02.180: INFO: Container kubernetes-dashboard ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-1999977b-6452-486a-9326-5d477208741d 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.250.28.25 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-1999977b-6452-486a-9326-5d477208741d off the node ip-10-250-28-25.ec2.internal +STEP: verifying the node doesn't have the label kubernetes.io/e2e-1999977b-6452-486a-9326-5d477208741d +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:08.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-291" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:307.293 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":225,"skipped":3677,"failed":0} +SSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:08.193: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1519 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-031da407-44bc-4f33-9a03-f54546e039f3 +STEP: Creating a pod to test consume secrets +Oct 27 15:10:09.119: INFO: Waiting up to 5m0s for pod "pod-secrets-cc1661d4-642a-4d77-930c-fec8d2dd58dc" in namespace "secrets-1519" to be "Succeeded or Failed" +Oct 27 15:10:09.210: INFO: Pod "pod-secrets-cc1661d4-642a-4d77-930c-fec8d2dd58dc": Phase="Pending", Reason="", readiness=false. Elapsed: 90.334518ms +Oct 27 15:10:11.301: INFO: Pod "pod-secrets-cc1661d4-642a-4d77-930c-fec8d2dd58dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181387551s +STEP: Saw pod success +Oct 27 15:10:11.301: INFO: Pod "pod-secrets-cc1661d4-642a-4d77-930c-fec8d2dd58dc" satisfied condition "Succeeded or Failed" +Oct 27 15:10:11.391: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-cc1661d4-642a-4d77-930c-fec8d2dd58dc container secret-volume-test: +STEP: delete the pod +Oct 27 15:10:11.584: INFO: Waiting for pod pod-secrets-cc1661d4-642a-4d77-930c-fec8d2dd58dc to disappear +Oct 27 15:10:11.674: INFO: Pod pod-secrets-cc1661d4-642a-4d77-930c-fec8d2dd58dc no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:11.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1519" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":226,"skipped":3681,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:11.945: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9188 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:10:12.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c88b918-9a6e-4a20-9769-2bdd105cced1" in namespace "projected-9188" to be "Succeeded or Failed" +Oct 27 15:10:12.898: INFO: Pod "downwardapi-volume-5c88b918-9a6e-4a20-9769-2bdd105cced1": Phase="Pending", Reason="", readiness=false. Elapsed: 90.168494ms +Oct 27 15:10:14.989: INFO: Pod "downwardapi-volume-5c88b918-9a6e-4a20-9769-2bdd105cced1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181067021s +STEP: Saw pod success +Oct 27 15:10:14.989: INFO: Pod "downwardapi-volume-5c88b918-9a6e-4a20-9769-2bdd105cced1" satisfied condition "Succeeded or Failed" +Oct 27 15:10:15.079: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-5c88b918-9a6e-4a20-9769-2bdd105cced1 container client-container: +STEP: delete the pod +Oct 27 15:10:15.311: INFO: Waiting for pod downwardapi-volume-5c88b918-9a6e-4a20-9769-2bdd105cced1 to disappear +Oct 27 15:10:15.402: INFO: Pod downwardapi-volume-5c88b918-9a6e-4a20-9769-2bdd105cced1 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:15.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9188" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":227,"skipped":3710,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:15.672: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2283 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-c7ddf756-e1ec-4b77-9b9f-629317db95a1 +STEP: Creating a pod to test consume configMaps +Oct 27 15:10:16.591: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a6a1e12-4c49-49a1-99dc-bc3bc47138fa" in namespace "configmap-2283" to be "Succeeded or Failed" +Oct 27 15:10:16.682: INFO: Pod "pod-configmaps-8a6a1e12-4c49-49a1-99dc-bc3bc47138fa": Phase="Pending", Reason="", readiness=false. Elapsed: 90.439544ms +Oct 27 15:10:18.773: INFO: Pod "pod-configmaps-8a6a1e12-4c49-49a1-99dc-bc3bc47138fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181489249s +STEP: Saw pod success +Oct 27 15:10:18.773: INFO: Pod "pod-configmaps-8a6a1e12-4c49-49a1-99dc-bc3bc47138fa" satisfied condition "Succeeded or Failed" +Oct 27 15:10:18.863: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-8a6a1e12-4c49-49a1-99dc-bc3bc47138fa container agnhost-container: +STEP: delete the pod +Oct 27 15:10:19.057: INFO: Waiting for pod pod-configmaps-8a6a1e12-4c49-49a1-99dc-bc3bc47138fa to disappear +Oct 27 15:10:19.147: INFO: Pod pod-configmaps-8a6a1e12-4c49-49a1-99dc-bc3bc47138fa no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:19.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2283" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":228,"skipped":3740,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:19.417: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4732 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-8725dca7-fd85-49bd-8621-c88f62b54f52 +STEP: Creating a pod to test consume secrets +Oct 27 15:10:20.336: INFO: Waiting up to 5m0s for pod "pod-secrets-9e1737cf-105d-492e-b4a7-459975aa74cb" in namespace "secrets-4732" to be "Succeeded or Failed" +Oct 27 15:10:20.427: INFO: Pod "pod-secrets-9e1737cf-105d-492e-b4a7-459975aa74cb": Phase="Pending", Reason="", readiness=false. Elapsed: 90.24544ms +Oct 27 15:10:22.518: INFO: Pod "pod-secrets-9e1737cf-105d-492e-b4a7-459975aa74cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181091955s +STEP: Saw pod success +Oct 27 15:10:22.518: INFO: Pod "pod-secrets-9e1737cf-105d-492e-b4a7-459975aa74cb" satisfied condition "Succeeded or Failed" +Oct 27 15:10:22.608: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-9e1737cf-105d-492e-b4a7-459975aa74cb container secret-env-test: +STEP: delete the pod +Oct 27 15:10:22.801: INFO: Waiting for pod pod-secrets-9e1737cf-105d-492e-b4a7-459975aa74cb to disappear +Oct 27 15:10:22.891: INFO: Pod pod-secrets-9e1737cf-105d-492e-b4a7-459975aa74cb no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:22.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4732" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":229,"skipped":3752,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:23.162: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5309 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name projected-secret-test-89748d9e-b260-4f5b-940d-c7ab91979f7c +STEP: Creating a pod to test consume secrets +Oct 27 15:10:24.081: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fd60a8dc-eb5e-4d1f-a6a3-40958274ca90" in namespace "projected-5309" to be "Succeeded or Failed" +Oct 27 15:10:24.171: INFO: Pod "pod-projected-secrets-fd60a8dc-eb5e-4d1f-a6a3-40958274ca90": Phase="Pending", Reason="", readiness=false. Elapsed: 90.25878ms +Oct 27 15:10:26.262: INFO: Pod "pod-projected-secrets-fd60a8dc-eb5e-4d1f-a6a3-40958274ca90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181313578s +STEP: Saw pod success +Oct 27 15:10:26.262: INFO: Pod "pod-projected-secrets-fd60a8dc-eb5e-4d1f-a6a3-40958274ca90" satisfied condition "Succeeded or Failed" +Oct 27 15:10:26.352: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-secrets-fd60a8dc-eb5e-4d1f-a6a3-40958274ca90 container secret-volume-test: +STEP: delete the pod +Oct 27 15:10:26.544: INFO: Waiting for pod pod-projected-secrets-fd60a8dc-eb5e-4d1f-a6a3-40958274ca90 to disappear +Oct 27 15:10:26.634: INFO: Pod pod-projected-secrets-fd60a8dc-eb5e-4d1f-a6a3-40958274ca90 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:26.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5309" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":230,"skipped":3763,"failed":0} + +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:26.904: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-8792 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 15:10:28.269: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:28.269: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 15:10:29.539: INFO: Number of nodes with available pods: 2 +Oct 27 15:10:29.539: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Oct 27 15:10:29.996: INFO: Number of nodes with available pods: 1 +Oct 27 15:10:29.996: INFO: Node ip-10-250-9-48.ec2.internal is running more than one daemon pod +Oct 27 15:10:31.268: INFO: Number of nodes with available pods: 1 +Oct 27 15:10:31.269: INFO: Node ip-10-250-9-48.ec2.internal is running more than one daemon pod +Oct 27 15:10:32.266: INFO: Number of nodes with available pods: 2 +Oct 27 15:10:32.266: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8792, will wait for the garbage collector to delete the pods +Oct 27 15:10:32.729: INFO: Deleting DaemonSet.extensions daemon-set took: 91.915583ms +Oct 27 15:10:32.830: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.634564ms +Oct 27 15:10:35.821: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:35.821: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 15:10:35.911: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"32020"},"items":null} + +Oct 27 15:10:36.001: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"32021"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:36.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8792" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":231,"skipped":3763,"failed":0} + +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:36.456: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-9914 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Oct 27 15:10:39.559: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9914 PodName:var-expansion-6a89db10-8058-43ce-84d6-bc8e1cbf7d03 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:10:39.559: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: test for file in mounted path +Oct 27 15:10:40.345: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9914 PodName:var-expansion-6a89db10-8058-43ce-84d6-bc8e1cbf7d03 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:10:40.345: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: updating the annotation value +Oct 27 15:10:41.697: INFO: Successfully updated pod "var-expansion-6a89db10-8058-43ce-84d6-bc8e1cbf7d03" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Oct 27 15:10:41.788: INFO: Deleting pod "var-expansion-6a89db10-8058-43ce-84d6-bc8e1cbf7d03" in 
namespace "var-expansion-9914" +Oct 27 15:10:41.879: INFO: Wait up to 5m0s for pod "var-expansion-6a89db10-8058-43ce-84d6-bc8e1cbf7d03" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:11:14.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9914" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":232,"skipped":3763,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:11:14.331: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7713 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-7713 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-7713 +I1027 15:11:15.432841 5725 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7713, replica count: 2 +Oct 27 15:11:18.534: INFO: Creating new exec pod +I1027 15:11:18.534304 5725 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:11:21.990: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7713 exec execpod8mksl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:11:23.095: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:11:23.095: INFO: stdout: "" +Oct 27 15:11:24.095: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7713 exec execpod8mksl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:11:25.124: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:11:25.124: INFO: stdout: "externalname-service-pzhrl" +Oct 27 15:11:25.125: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7713 exec execpod8mksl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.16.144 80' +Oct 27 15:11:26.177: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.16.144 80\nConnection to 100.64.16.144 80 port [tcp/http] succeeded!\n" +Oct 27 15:11:26.177: INFO: stdout: "externalname-service-lv7dx" +Oct 27 15:11:26.177: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7713 exec execpod8mksl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.28.25 31523' +Oct 27 15:11:27.240: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.28.25 31523\nConnection to 10.250.28.25 31523 port [tcp/*] succeeded!\n" +Oct 27 15:11:27.240: INFO: stdout: "" +Oct 27 15:11:28.241: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7713 exec execpod8mksl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.28.25 31523' +Oct 27 15:11:29.282: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.28.25 31523\nConnection to 10.250.28.25 31523 port [tcp/*] succeeded!\n" +Oct 27 15:11:29.282: INFO: stdout: "externalname-service-pzhrl" +Oct 27 15:11:29.282: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7713 exec execpod8mksl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.9.48 31523' +Oct 27 15:11:30.480: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.9.48 31523\nConnection to 10.250.9.48 31523 port [tcp/*] succeeded!\n" +Oct 27 15:11:30.480: INFO: stdout: "" +Oct 27 15:11:31.481: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7713 exec execpod8mksl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.9.48 31523' +Oct 27 15:11:32.546: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.9.48 31523\nConnection to 10.250.9.48 31523 port [tcp/*] succeeded!\n" +Oct 27 15:11:32.546: INFO: stdout: "externalname-service-pzhrl" +Oct 27 15:11:32.546: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:11:32.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7713" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":233,"skipped":3777,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:11:32.915: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1981 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:11:33.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43847bbb-ae90-4f92-94b5-4f538d18fa53" in namespace "downward-api-1981" to be "Succeeded or Failed" +Oct 27 15:11:33.834: INFO: Pod "downwardapi-volume-43847bbb-ae90-4f92-94b5-4f538d18fa53": Phase="Pending", Reason="", readiness=false. Elapsed: 90.366356ms +Oct 27 15:11:35.926: INFO: Pod "downwardapi-volume-43847bbb-ae90-4f92-94b5-4f538d18fa53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181578106s +STEP: Saw pod success +Oct 27 15:11:35.926: INFO: Pod "downwardapi-volume-43847bbb-ae90-4f92-94b5-4f538d18fa53" satisfied condition "Succeeded or Failed" +Oct 27 15:11:36.016: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-43847bbb-ae90-4f92-94b5-4f538d18fa53 container client-container: +STEP: delete the pod +Oct 27 15:11:36.228: INFO: Waiting for pod downwardapi-volume-43847bbb-ae90-4f92-94b5-4f538d18fa53 to disappear +Oct 27 15:11:36.319: INFO: Pod downwardapi-volume-43847bbb-ae90-4f92-94b5-4f538d18fa53 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:11:36.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1981" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":234,"skipped":3793,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:11:36.590: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5290 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:11:38.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944297, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944297, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944297, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944297, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:11:41.403: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:11:42.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5290" for this suite. +STEP: Destroying namespace "webhook-5290-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":235,"skipped":3844,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:11:42.750: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7852 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-7852 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-7852 +I1027 15:11:43.888754 5725 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7852, replica count: 2 +Oct 27 15:11:46.990: INFO: Creating new exec pod +I1027 15:11:46.990862 5725 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:11:50.267: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7852 exec execpoddk8rf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:11:51.291: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:11:51.291: INFO: stdout: "externalname-service-4nj6d" +Oct 27 15:11:51.292: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7852 exec execpoddk8rf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.174.29 80' +Oct 27 15:11:52.321: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.174.29 80\nConnection to 100.71.174.29 80 port [tcp/http] succeeded!\n" +Oct 27 15:11:52.321: INFO: stdout: "" +Oct 27 15:11:53.321: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7852 exec execpoddk8rf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.174.29 80' +Oct 27 15:11:54.340: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 
2 100.71.174.29 80\nConnection to 100.71.174.29 80 port [tcp/http] succeeded!\n" +Oct 27 15:11:54.340: INFO: stdout: "externalname-service-4nj6d" +Oct 27 15:11:54.340: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:11:54.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7852" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":236,"skipped":3850,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:11:54.706: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1565 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap that has name configmap-test-emptyKey-b94b2d56-c572-4d9f-9fd3-e37ebf0aee72 +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:11:55.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1565" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":237,"skipped":3873,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:11:55.710: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9971 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Oct 27 15:11:56.721: INFO: observed Pod pod-test in namespace pods-9971 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Oct 27 15:11:56.721: INFO: observed Pod pod-test in namespace pods-9971 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC }] +Oct 27 15:11:56.721: INFO: observed Pod pod-test in namespace pods-9971 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC }] +Oct 27 15:11:57.519: INFO: observed Pod pod-test in namespace pods-9971 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC }] +Oct 27 15:11:58.611: INFO: Found Pod pod-test in namespace pods-9971 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:11:56 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Oct 27 15:11:58.794: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched +STEP: replacing the Pod's status Ready condition to False 
+STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Oct 27 15:11:59.249: INFO: observed event type ADDED +Oct 27 15:11:59.249: INFO: observed event type MODIFIED +Oct 27 15:11:59.249: INFO: observed event type MODIFIED +Oct 27 15:11:59.249: INFO: observed event type MODIFIED +Oct 27 15:11:59.249: INFO: observed event type MODIFIED +Oct 27 15:11:59.249: INFO: observed event type MODIFIED +Oct 27 15:11:59.249: INFO: observed event type MODIFIED +Oct 27 15:11:59.249: INFO: observed event type MODIFIED +Oct 27 15:12:00.643: INFO: observed event type MODIFIED +Oct 27 15:12:00.776: INFO: observed event type MODIFIED +Oct 27 15:12:01.691: INFO: observed event type MODIFIED +Oct 27 15:12:01.697: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:01.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9971" for this suite. +•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":238,"skipped":3900,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:01.881: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1723 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:12:02.976: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 15:12:03.248: INFO: Number of nodes with available pods: 0 +Oct 27 15:12:03.248: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 15:12:04.518: INFO: Number of nodes with available pods: 0 +Oct 27 15:12:04.518: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 15:12:05.518: INFO: Number of nodes with available pods: 2 +Oct 27 15:12:05.518: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Oct 27 15:12:06.153: INFO: Wrong image for pod: daemon-set-7fds2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 15:12:07.334: INFO: Wrong image for pod: daemon-set-7fds2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
+Oct 27 15:12:08.334: INFO: Pod daemon-set-2fh4n is not available +Oct 27 15:12:08.334: INFO: Wrong image for pod: daemon-set-7fds2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 15:12:09.336: INFO: Pod daemon-set-2fh4n is not available +Oct 27 15:12:09.336: INFO: Wrong image for pod: daemon-set-7fds2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 15:12:11.334: INFO: Pod daemon-set-lhpbs is not available +STEP: Check that daemon pods are still running on every node of the cluster. +Oct 27 15:12:11.694: INFO: Number of nodes with available pods: 1 +Oct 27 15:12:11.694: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 15:12:12.966: INFO: Number of nodes with available pods: 1 +Oct 27 15:12:12.966: INFO: Node ip-10-250-28-25.ec2.internal is running more than one daemon pod +Oct 27 15:12:13.965: INFO: Number of nodes with available pods: 2 +Oct 27 15:12:13.965: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1723, will wait for the garbage collector to delete the pods +Oct 27 15:12:14.699: INFO: Deleting DaemonSet.extensions daemon-set took: 91.29689ms +Oct 27 15:12:14.800: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.962901ms +Oct 27 15:12:16.290: INFO: Number of nodes with available pods: 0 +Oct 27 15:12:16.290: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 15:12:16.381: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"32820"},"items":null} + +Oct 27 15:12:16.471: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"32820"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:16.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1723" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":239,"skipped":3918,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:16.925: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2260 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating api versions +Oct 27 15:12:17.664: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2260 api-versions' +Oct 27 15:12:18.086: INFO: stderr: "" +Oct 27 15:12:18.086: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling.k8s.io/v1\nautoscaling.k8s.io/v1beta2\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncert.gardener.cloud/v1alpha1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\ndns.gardener.cloud/v1alpha1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:18.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2260" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":240,"skipped":3960,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:18.268: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-1689 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:12:19.097: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a69c2bc2-4de3-42dd-a6e5-33df9d340cf8" in namespace "security-context-test-1689" to be "Succeeded or Failed" +Oct 27 15:12:19.187: INFO: Pod "alpine-nnp-false-a69c2bc2-4de3-42dd-a6e5-33df9d340cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 90.415526ms +Oct 27 15:12:21.279: INFO: Pod "alpine-nnp-false-a69c2bc2-4de3-42dd-a6e5-33df9d340cf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182082766s +Oct 27 15:12:21.279: INFO: Pod "alpine-nnp-false-a69c2bc2-4de3-42dd-a6e5-33df9d340cf8" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:21.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-1689" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":241,"skipped":4002,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:21.649: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6955 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 15:12:22.478: INFO: Waiting up to 5m0s for pod "pod-280df58d-8170-49a3-a469-439477ee2459" in namespace "emptydir-6955" to be "Succeeded or Failed" +Oct 27 15:12:22.569: INFO: Pod "pod-280df58d-8170-49a3-a469-439477ee2459": Phase="Pending", Reason="", readiness=false. Elapsed: 90.407383ms +Oct 27 15:12:24.662: INFO: Pod "pod-280df58d-8170-49a3-a469-439477ee2459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.183606897s +STEP: Saw pod success +Oct 27 15:12:24.662: INFO: Pod "pod-280df58d-8170-49a3-a469-439477ee2459" satisfied condition "Succeeded or Failed" +Oct 27 15:12:24.753: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-280df58d-8170-49a3-a469-439477ee2459 container test-container: +STEP: delete the pod +Oct 27 15:12:24.945: INFO: Waiting for pod pod-280df58d-8170-49a3-a469-439477ee2459 to disappear +Oct 27 15:12:25.035: INFO: Pod pod-280df58d-8170-49a3-a469-439477ee2459 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:25.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6955" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":242,"skipped":4003,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:25.306: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2441 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 15:12:26.309: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 15:12:26.489: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 15:12:26.855: INFO: waiting for watch events with expected annotations +Oct 27 15:12:26.855: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:27.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2441" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":243,"skipped":4020,"failed":0} +SS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:27.855: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-2755 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:12:28.587: INFO: Creating deployment "test-recreate-deployment" +Oct 27 15:12:28.678: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Oct 27 15:12:28.858: INFO: Waiting deployment "test-recreate-deployment" to complete +Oct 27 15:12:28.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944348, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944348, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944348, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944348, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:12:31.040: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Oct 27 15:12:31.221: INFO: Updating deployment test-recreate-deployment +Oct 27 15:12:31.221: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:12:31.401: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-2755 14c2b3cd-d824-4980-9027-7739ebbd8d80 33020 2 2021-10-27 15:12:28 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 15:12:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:12:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b9b9d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 15:12:31 +0000 UTC,LastTransitionTime:2021-10-27 15:12:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-27 15:12:31 +0000 UTC,LastTransitionTime:2021-10-27 15:12:28 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Oct 27 15:12:31.492: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-2755 d3a568cb-b086-4c08-a1ed-8f1373c8100f 33019 1 2021-10-27 15:12:31 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 14c2b3cd-d824-4980-9027-7739ebbd8d80 0xc006b9be90 0xc006b9be91}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:12:31 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14c2b3cd-d824-4980-9027-7739ebbd8d80\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:12:31 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b9bf28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:12:31.492: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Oct 27 15:12:31.492: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-2755 e40adbe4-5dfb-4721-bd2c-20e7b3aeea28 33012 2 2021-10-27 15:12:28 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 14c2b3cd-d824-4980-9027-7739ebbd8d80 0xc006b9bd77 0xc006b9bd78}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:12:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14c2b3cd-d824-4980-9027-7739ebbd8d80\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:12:31 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b9be28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:12:31.583: INFO: Pod "test-recreate-deployment-85d47dcb4-km2wc" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-km2wc test-recreate-deployment-85d47dcb4- deployment-2755 34a14ee1-1166-42c9-8e62-fdaf9b34a1dd 33021 0 2021-10-27 15:12:31 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 d3a568cb-b086-4c08-a1ed-8f1373c8100f 0xc003aa0380 0xc003aa0381}] [] [{kube-controller-manager Update v1 2021-10-27 15:12:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d3a568cb-b086-4c08-a1ed-8f1373c8100f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:12:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kc6f8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc6f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:12:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:12:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:12:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:12:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:12:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:31.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2755" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":244,"skipped":4022,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:31.854: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7607 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-7607 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-7607 +Oct 27 15:12:33.104: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 15:12:43.195: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Oct 27 15:12:43.570: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Oct 27 15:12:43.752: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Oct 27 15:12:43.842: INFO: Observed &StatefulSet event: ADDED +Oct 27 15:12:43.842: INFO: Found Statefulset ss in namespace statefulset-7607 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 15:12:43.842: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Oct 27 15:12:43.842: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 15:12:44.007: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Oct 27 15:12:44.101: INFO: Observed &StatefulSet event: ADDED +Oct 27 15:12:44.101: INFO: Observed Statefulset ss in namespace statefulset-7607 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 15:12:44.101: INFO: Observed &StatefulSet event: MODIFIED +Oct 27 15:12:44.101: INFO: Found 
Statefulset ss in namespace statefulset-7607 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:12:44.101: INFO: Deleting all statefulset in ns statefulset-7607 +Oct 27 15:12:44.191: INFO: Scaling statefulset ss to 0 +Oct 27 15:12:54.554: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:12:54.644: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:54.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7607" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":245,"skipped":4073,"failed":0} +SSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:55.186: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-8496 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:58.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-8496" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":246,"skipped":4076,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:58.659: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8758 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:12:59.578: INFO: The status of Pod annotationupdated2b16483-23a1-488b-a226-fdf756be3bd5 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:13:01.669: INFO: The status of Pod annotationupdated2b16483-23a1-488b-a226-fdf756be3bd5 is Running (Ready = true) +Oct 27 15:13:02.543: INFO: Successfully updated pod "annotationupdated2b16483-23a1-488b-a226-fdf756be3bd5" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:04.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8758" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":247,"skipped":4088,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:05.010: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-6116 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Oct 27 15:13:06.002: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:13:08.092: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 15:13:08.368: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:13:10.459: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 15:13:10.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:13:10.731: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 27 15:13:12.731: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:13:12.822: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 27 15:13:14.731: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:13:14.822: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:14.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-6116" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":248,"skipped":4165,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:15.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-5359 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 15:13:22.656: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For 
garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 15:13:22.656166 5725 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:22.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5359" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":249,"skipped":4186,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:22.838: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1541 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:13:24.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944404, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944404, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944404, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944404, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:13:27.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:29.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1541" for this suite. +STEP: Destroying namespace "webhook-1541-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":250,"skipped":4194,"failed":0} +S +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:29.949: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7965 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:13:32.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944411, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944411, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944411, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944411, 
loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:13:35.365: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:36.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7965" for this suite. +STEP: Destroying namespace "webhook-7965-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":251,"skipped":4195,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:36.791: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5412 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5412.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5412.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5412.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5412.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5412.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5412.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:13:40.739: INFO: DNS probes using dns-5412/dns-test-357bb6c4-f452-4f5b-b932-2a84ae96e274 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:40.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5412" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":252,"skipped":4211,"failed":0} +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:41.108: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2832 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 15:13:41.840: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2832 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 15:13:42.593: INFO: stderr: "" +Oct 27 15:13:42.593: INFO: stdout: "pod/e2e-test-httpd-pod 
created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Oct 27 15:13:47.694: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2832 get pod e2e-test-httpd-pod -o json' +Oct 27 15:13:48.057: INFO: stderr: "" +Oct 27 15:13:48.057: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"8a79138fe5502b1b2e82e27778b2b8c0902e1b3ac084963847abc7c036843430\",\n \"cni.projectcalico.org/podIP\": \"100.96.1.56/32\",\n \"cni.projectcalico.org/podIPs\": \"100.96.1.56/32\",\n \"kubernetes.io/psp\": \"e2e-test-privileged-psp\"\n },\n \"creationTimestamp\": \"2021-10-27T15:13:42Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2832\",\n \"resourceVersion\": \"33773\",\n \"uid\": \"66c0121a-f18f-424c-9094-1ca896244b9b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"env\": [\n {\n \"name\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"api.tm94z-0j6.it.internal.staging.k8s.ondemand.com\"\n }\n ],\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-2pb7m\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"ip-10-250-28-25.ec2.internal\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-2pb7m\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T15:13:42Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T15:13:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T15:13:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T15:13:42Z\",\n \"status\": \"True\",\n \"type\": 
\"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://ded7a4459f58e0410443876724a077b02abddfffc664dcaa5d059b05b3a0b8e0\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-27T15:13:43Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.250.28.25\",\n \"phase\": \"Running\",\n \"podIP\": \"100.96.1.56\",\n \"podIPs\": [\n {\n \"ip\": \"100.96.1.56\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-27T15:13:42Z\"\n }\n}\n" +STEP: replace the image in the pod +Oct 27 15:13:48.057: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2832 replace -f -' +Oct 27 15:13:48.611: INFO: stderr: "" +Oct 27 15:13:48.611: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 +Oct 27 15:13:48.701: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2832 delete pods e2e-test-httpd-pod' +Oct 27 15:13:50.676: INFO: stderr: "" +Oct 27 15:13:50.676: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:13:50.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2832" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":253,"skipped":4219,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:13:50.947: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename prestop +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-4115 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating server pod server in namespace prestop-4115 +STEP: Waiting for pods to come up. 
+STEP: Creating tester pod tester in namespace prestop-4115 +STEP: Deleting pre-stop pod +Oct 27 15:14:01.561: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:01.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-4115" for this suite. +•{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":254,"skipped":4239,"failed":0} +SS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:01.928: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1833 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-1d6dbf01-ab57-4cb7-b911-83fdd5db69fc +STEP: Creating a pod to test consume configMaps +Oct 27 15:14:02.846: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f90a56c9-cbb2-44f9-b467-070f6ade76c9" in namespace "projected-1833" to be "Succeeded or Failed" +Oct 27 15:14:02.937: INFO: Pod "pod-projected-configmaps-f90a56c9-cbb2-44f9-b467-070f6ade76c9": Phase="Pending", Reason="", readiness=false. Elapsed: 90.448066ms +Oct 27 15:14:05.028: INFO: Pod "pod-projected-configmaps-f90a56c9-cbb2-44f9-b467-070f6ade76c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.18157891s +STEP: Saw pod success +Oct 27 15:14:05.028: INFO: Pod "pod-projected-configmaps-f90a56c9-cbb2-44f9-b467-070f6ade76c9" satisfied condition "Succeeded or Failed" +Oct 27 15:14:05.118: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-configmaps-f90a56c9-cbb2-44f9-b467-070f6ade76c9 container agnhost-container: +STEP: delete the pod +Oct 27 15:14:05.309: INFO: Waiting for pod pod-projected-configmaps-f90a56c9-cbb2-44f9-b467-070f6ade76c9 to disappear +Oct 27 15:14:05.400: INFO: Pod pod-projected-configmaps-f90a56c9-cbb2-44f9-b467-070f6ade76c9 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1833" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":255,"skipped":4241,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:05.670: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4951 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-6fc1299f-cf05-4a3f-9783-c1f615fdf666 +STEP: Creating a pod to test consume secrets +Oct 27 15:14:06.591: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5a83493c-4cc6-4d48-8741-506f4357e3b6" in namespace "projected-4951" to be "Succeeded or Failed" +Oct 27 15:14:06.682: INFO: Pod "pod-projected-secrets-5a83493c-4cc6-4d48-8741-506f4357e3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 90.567814ms +Oct 27 15:14:08.773: INFO: Pod "pod-projected-secrets-5a83493c-4cc6-4d48-8741-506f4357e3b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182237315s +STEP: Saw pod success +Oct 27 15:14:08.773: INFO: Pod "pod-projected-secrets-5a83493c-4cc6-4d48-8741-506f4357e3b6" satisfied condition "Succeeded or Failed" +Oct 27 15:14:08.864: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-secrets-5a83493c-4cc6-4d48-8741-506f4357e3b6 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 15:14:09.095: INFO: Waiting for pod pod-projected-secrets-5a83493c-4cc6-4d48-8741-506f4357e3b6 to disappear +Oct 27 15:14:09.185: INFO: Pod pod-projected-secrets-5a83493c-4cc6-4d48-8741-506f4357e3b6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:09.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4951" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":256,"skipped":4256,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:09.456: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3707 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-de3a0291-6e85-4dad-bc7e-723a98d4d420 +STEP: Creating a pod to test consume secrets +Oct 27 15:14:10.376: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e500b68-5457-4bd7-bec3-17dc27d07f3b" in namespace "projected-3707" to be "Succeeded or Failed" +Oct 27 15:14:10.476: INFO: Pod "pod-projected-secrets-6e500b68-5457-4bd7-bec3-17dc27d07f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 100.172018ms +Oct 27 15:14:12.567: INFO: Pod "pod-projected-secrets-6e500b68-5457-4bd7-bec3-17dc27d07f3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.191265031s +STEP: Saw pod success +Oct 27 15:14:12.567: INFO: Pod "pod-projected-secrets-6e500b68-5457-4bd7-bec3-17dc27d07f3b" satisfied condition "Succeeded or Failed" +Oct 27 15:14:12.658: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-secrets-6e500b68-5457-4bd7-bec3-17dc27d07f3b container projected-secret-volume-test: +STEP: delete the pod +Oct 27 15:14:12.892: INFO: Waiting for pod pod-projected-secrets-6e500b68-5457-4bd7-bec3-17dc27d07f3b to disappear +Oct 27 15:14:12.981: INFO: Pod pod-projected-secrets-6e500b68-5457-4bd7-bec3-17dc27d07f3b no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:12.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3707" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":257,"skipped":4267,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:13.253: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9732 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:14:14.171: INFO: The status of Pod busybox-scheduling-a9543a41-5828-4908-9b61-963f62b4515f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:14:16.262: INFO: The status of Pod busybox-scheduling-a9543a41-5828-4908-9b61-963f62b4515f is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:16.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9732" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":258,"skipped":4294,"failed":0} +SSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:16.720: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svc-latency +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-3282 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:14:17.452: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating replication controller svc-latency-rc in namespace svc-latency-3282 +I1027 15:14:17.549395 5725 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3282, replica count: 1 +I1027 15:14:18.650951 5725 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 15:14:19.651232 5725 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:14:19.847: INFO: Created: latency-svc-plx7m +Oct 27 15:14:19.852: INFO: Got endpoints: latency-svc-plx7m [100.693978ms] +Oct 27 15:14:20.104: INFO: Created: latency-svc-bmf5g +Oct 27 15:14:20.108: INFO: Got endpoints: latency-svc-bmf5g [255.248601ms] +Oct 27 15:14:20.108: INFO: Created: latency-svc-2lwf9 +Oct 27 15:14:20.112: INFO: Created: latency-svc-pllk9 +Oct 27 15:14:20.112: INFO: Got endpoints: latency-svc-2lwf9 [258.233467ms] +Oct 27 15:14:20.116: INFO: Got endpoints: latency-svc-pllk9 [262.141078ms] +Oct 27 15:14:20.188: INFO: Created: latency-svc-4w8fz +Oct 27 15:14:20.204: INFO: Got endpoints: latency-svc-4w8fz [350.966015ms] +Oct 27 15:14:20.205: INFO: Created: latency-svc-p5lpx +Oct 27 15:14:20.219: INFO: Created: latency-svc-fjrgc +Oct 27 15:14:20.219: INFO: Created: latency-svc-7x6q2 +Oct 27 15:14:20.219: INFO: Got endpoints: latency-svc-7x6q2 [365.818465ms] +Oct 27 15:14:20.219: INFO: Got endpoints: latency-svc-p5lpx [366.246235ms] +Oct 27 15:14:20.220: INFO: Got endpoints: latency-svc-fjrgc [366.267857ms] +Oct 27 15:14:20.220: INFO: Created: latency-svc-5kv9r +Oct 27 15:14:20.222: INFO: Got endpoints: latency-svc-5kv9r [367.920029ms] +Oct 27 15:14:20.225: INFO: Created: latency-svc-bpblk +Oct 27 15:14:20.229: INFO: Got endpoints: latency-svc-bpblk [374.711715ms] +Oct 27 15:14:20.229: INFO: Created: latency-svc-plr99 +Oct 27 15:14:20.233: INFO: Got endpoints: latency-svc-plr99 [379.059277ms] +Oct 27 15:14:20.289: INFO: Created: latency-svc-zglvs +Oct 27 15:14:20.289: INFO: Created: latency-svc-gkx4s +Oct 27 15:14:20.289: INFO: Created: latency-svc-skvdx +Oct 27 15:14:20.289: INFO: Got endpoints: latency-svc-zglvs [435.30911ms] +Oct 27 15:14:20.289: INFO: Got endpoints: latency-svc-skvdx [434.690345ms] +Oct 27 
15:14:20.292: INFO: Created: latency-svc-brtlq +Oct 27 15:14:20.292: INFO: Created: latency-svc-4t9mc +Oct 27 15:14:20.292: INFO: Got endpoints: latency-svc-4t9mc [439.319346ms] +Oct 27 15:14:20.292: INFO: Got endpoints: latency-svc-gkx4s [438.939744ms] +Oct 27 15:14:20.301: INFO: Created: latency-svc-hx2nc +Oct 27 15:14:20.302: INFO: Got endpoints: latency-svc-brtlq [447.668977ms] +Oct 27 15:14:20.305: INFO: Got endpoints: latency-svc-hx2nc [196.96807ms] +Oct 27 15:14:20.305: INFO: Created: latency-svc-xx5lx +Oct 27 15:14:20.310: INFO: Created: latency-svc-nmxzh +Oct 27 15:14:20.311: INFO: Got endpoints: latency-svc-xx5lx [198.411284ms] +Oct 27 15:14:20.314: INFO: Created: latency-svc-75hz6 +Oct 27 15:14:20.314: INFO: Got endpoints: latency-svc-nmxzh [197.670037ms] +Oct 27 15:14:20.318: INFO: Got endpoints: latency-svc-75hz6 [113.499902ms] +Oct 27 15:14:20.318: INFO: Created: latency-svc-sh8k7 +Oct 27 15:14:20.319: INFO: Got endpoints: latency-svc-sh8k7 [100.178944ms] +Oct 27 15:14:20.323: INFO: Created: latency-svc-cs8xr +Oct 27 15:14:20.327: INFO: Got endpoints: latency-svc-cs8xr [107.839158ms] +Oct 27 15:14:20.327: INFO: Created: latency-svc-vfp6s +Oct 27 15:14:20.329: INFO: Got endpoints: latency-svc-vfp6s [109.379648ms] +Oct 27 15:14:20.334: INFO: Created: latency-svc-bqw94 +Oct 27 15:14:20.336: INFO: Got endpoints: latency-svc-bqw94 [114.473563ms] +Oct 27 15:14:20.339: INFO: Created: latency-svc-tzxrw +Oct 27 15:14:20.341: INFO: Got endpoints: latency-svc-tzxrw [112.673469ms] +Oct 27 15:14:20.344: INFO: Created: latency-svc-dsvcb +Oct 27 15:14:20.346: INFO: Got endpoints: latency-svc-dsvcb [112.774997ms] +Oct 27 15:14:20.405: INFO: Created: latency-svc-q8dqn +Oct 27 15:14:20.410: INFO: Got endpoints: latency-svc-q8dqn [98.978227ms] +Oct 27 15:14:20.410: INFO: Created: latency-svc-qd2b2 +Oct 27 15:14:20.411: INFO: Got endpoints: latency-svc-qd2b2 [106.064787ms] +Oct 27 15:14:20.416: INFO: Created: latency-svc-xm64g +Oct 27 15:14:20.417: INFO: Got endpoints: latency-svc-xm64g [124.866833ms] +Oct 27 15:14:20.420: INFO: Created: latency-svc-z6dvb +Oct 27 15:14:20.422: INFO: Got endpoints: latency-svc-z6dvb [120.248343ms] +Oct 27 15:14:20.425: INFO: Created: latency-svc-rj22s +Oct 27 15:14:20.427: INFO: Got endpoints: latency-svc-rj22s [138.011578ms] +Oct 27 15:14:20.431: INFO: Created: latency-svc-xjtzx +Oct 27 15:14:20.432: INFO: Got endpoints: latency-svc-xjtzx [140.007531ms] +Oct 27 15:14:20.435: INFO: Created: latency-svc-m7f6k +Oct 27 15:14:20.437: INFO: Got endpoints: latency-svc-m7f6k [148.268925ms] +Oct 27 15:14:20.440: INFO: Created: latency-svc-xbrv7 +Oct 27 15:14:20.442: INFO: Got endpoints: latency-svc-xbrv7 [128.752977ms] +Oct 27 15:14:20.444: INFO: Created: latency-svc-pzmc7 +Oct 27 15:14:20.449: INFO: Created: latency-svc-5rtlg +Oct 27 15:14:20.449: INFO: Got endpoints: latency-svc-pzmc7 [131.225396ms] +Oct 27 15:14:20.500: INFO: Got endpoints: latency-svc-5rtlg [180.365141ms] +Oct 27 15:14:20.501: INFO: Created: latency-svc-qbxdr +Oct 27 15:14:20.505: INFO: Created: latency-svc-pb2kv +Oct 27 15:14:20.509: INFO: Created: latency-svc-5gdkl +Oct 27 15:14:20.518: INFO: Created: latency-svc-9cr8p +Oct 27 15:14:20.522: INFO: Created: latency-svc-vxtb4 +Oct 27 15:14:20.525: INFO: Got endpoints: latency-svc-qbxdr [198.085973ms] +Oct 27 15:14:20.526: INFO: Got endpoints: latency-svc-pb2kv [196.530677ms] +Oct 27 15:14:20.528: INFO: Got endpoints: latency-svc-5gdkl [192.119122ms] +Oct 27 15:14:20.535: INFO: Created: latency-svc-skdpq +Oct 27 15:14:20.538: INFO: Created: 
latency-svc-mvjjw +Oct 27 15:14:20.543: INFO: Created: latency-svc-x8zpn +Oct 27 15:14:20.603: INFO: Got endpoints: latency-svc-9cr8p [261.529563ms] +Oct 27 15:14:20.603: INFO: Created: latency-svc-xj7nf +Oct 27 15:14:20.608: INFO: Created: latency-svc-g4fl2 +Oct 27 15:14:20.608: INFO: Got endpoints: latency-svc-vxtb4 [262.677819ms] +Oct 27 15:14:20.612: INFO: Created: latency-svc-s8t27 +Oct 27 15:14:20.617: INFO: Created: latency-svc-wqbsg +Oct 27 15:14:20.621: INFO: Created: latency-svc-86hbh +Oct 27 15:14:20.626: INFO: Created: latency-svc-9dqm2 +Oct 27 15:14:20.630: INFO: Created: latency-svc-5rp4r +Oct 27 15:14:20.634: INFO: Created: latency-svc-45mmf +Oct 27 15:14:20.639: INFO: Created: latency-svc-hx6c5 +Oct 27 15:14:20.644: INFO: Created: latency-svc-xgkdw +Oct 27 15:14:20.663: INFO: Got endpoints: latency-svc-skdpq [253.644281ms] +Oct 27 15:14:20.698: INFO: Created: latency-svc-vfpq2 +Oct 27 15:14:20.703: INFO: Created: latency-svc-9bbgz +Oct 27 15:14:20.706: INFO: Got endpoints: latency-svc-mvjjw [295.337846ms] +Oct 27 15:14:20.759: INFO: Created: latency-svc-ssmxt +Oct 27 15:14:20.759: INFO: Got endpoints: latency-svc-x8zpn [341.543962ms] +Oct 27 15:14:20.801: INFO: Created: latency-svc-2hmqv +Oct 27 15:14:20.807: INFO: Got endpoints: latency-svc-xj7nf [385.017046ms] +Oct 27 15:14:20.854: INFO: Created: latency-svc-cfgck +Oct 27 15:14:20.859: INFO: Got endpoints: latency-svc-g4fl2 [431.805591ms] +Oct 27 15:14:20.902: INFO: Created: latency-svc-b2v8p +Oct 27 15:14:20.906: INFO: Got endpoints: latency-svc-s8t27 [473.890848ms] +Oct 27 15:14:20.954: INFO: Created: latency-svc-qtkf2 +Oct 27 15:14:20.956: INFO: Got endpoints: latency-svc-wqbsg [519.191242ms] +Oct 27 15:14:21.002: INFO: Created: latency-svc-9hbgx +Oct 27 15:14:21.007: INFO: Got endpoints: latency-svc-86hbh [564.571592ms] +Oct 27 15:14:21.052: INFO: Created: latency-svc-gghpx +Oct 27 15:14:21.057: INFO: Got endpoints: latency-svc-9dqm2 [607.839705ms] +Oct 27 15:14:21.103: INFO: Created: latency-svc-z7r8x +Oct 27 15:14:21.106: INFO: Got endpoints: latency-svc-5rp4r [606.516033ms] +Oct 27 15:14:21.153: INFO: Created: latency-svc-vbq42 +Oct 27 15:14:21.157: INFO: Got endpoints: latency-svc-45mmf [631.42799ms] +Oct 27 15:14:21.204: INFO: Created: latency-svc-b2vsz +Oct 27 15:14:21.207: INFO: Got endpoints: latency-svc-hx6c5 [680.816286ms] +Oct 27 15:14:21.254: INFO: Created: latency-svc-glczf +Oct 27 15:14:21.257: INFO: Got endpoints: latency-svc-xgkdw [728.992966ms] +Oct 27 15:14:21.302: INFO: Created: latency-svc-rjzwg +Oct 27 15:14:21.309: INFO: Got endpoints: latency-svc-vfpq2 [705.551272ms] +Oct 27 15:14:21.356: INFO: Created: latency-svc-98p4n +Oct 27 15:14:21.359: INFO: Got endpoints: latency-svc-9bbgz [750.183491ms] +Oct 27 15:14:21.404: INFO: Created: latency-svc-fhl6v +Oct 27 15:14:21.407: INFO: Got endpoints: latency-svc-ssmxt [743.456585ms] +Oct 27 15:14:21.454: INFO: Created: latency-svc-ft8qd +Oct 27 15:14:21.459: INFO: Got endpoints: latency-svc-2hmqv [752.388047ms] +Oct 27 15:14:21.503: INFO: Created: latency-svc-tlncs +Oct 27 15:14:21.510: INFO: Got endpoints: latency-svc-cfgck [750.694882ms] +Oct 27 15:14:21.554: INFO: Created: latency-svc-clc74 +Oct 27 15:14:21.556: INFO: Got endpoints: latency-svc-b2v8p [748.832707ms] +Oct 27 15:14:21.606: INFO: Created: latency-svc-nrv9n +Oct 27 15:14:21.608: INFO: Got endpoints: latency-svc-qtkf2 [749.444923ms] +Oct 27 15:14:21.651: INFO: Created: latency-svc-bbmrl +Oct 27 15:14:21.657: INFO: Got endpoints: latency-svc-9hbgx [750.411843ms] +Oct 27 15:14:21.704: INFO: 
Created: latency-svc-n5xr7 +Oct 27 15:14:21.708: INFO: Got endpoints: latency-svc-gghpx [751.634366ms] +Oct 27 15:14:21.752: INFO: Created: latency-svc-zwjdt +Oct 27 15:14:21.756: INFO: Got endpoints: latency-svc-z7r8x [748.994436ms] +Oct 27 15:14:21.807: INFO: Created: latency-svc-vpjs6 +Oct 27 15:14:21.807: INFO: Got endpoints: latency-svc-vbq42 [750.166917ms] +Oct 27 15:14:21.852: INFO: Created: latency-svc-vjr5s +Oct 27 15:14:21.856: INFO: Got endpoints: latency-svc-b2vsz [749.832444ms] +Oct 27 15:14:21.902: INFO: Created: latency-svc-8dkcv +Oct 27 15:14:21.909: INFO: Got endpoints: latency-svc-glczf [752.203941ms] +Oct 27 15:14:21.952: INFO: Created: latency-svc-zf9j5 +Oct 27 15:14:21.957: INFO: Got endpoints: latency-svc-rjzwg [750.529803ms] +Oct 27 15:14:22.004: INFO: Created: latency-svc-cwggw +Oct 27 15:14:22.008: INFO: Got endpoints: latency-svc-98p4n [748.107708ms] +Oct 27 15:14:22.054: INFO: Created: latency-svc-cnbgc +Oct 27 15:14:22.059: INFO: Got endpoints: latency-svc-fhl6v [750.25918ms] +Oct 27 15:14:22.104: INFO: Created: latency-svc-cv24d +Oct 27 15:14:22.107: INFO: Got endpoints: latency-svc-ft8qd [748.32643ms] +Oct 27 15:14:22.154: INFO: Created: latency-svc-ktlfh +Oct 27 15:14:22.156: INFO: Got endpoints: latency-svc-tlncs [749.693257ms] +Oct 27 15:14:22.216: INFO: Created: latency-svc-9cldc +Oct 27 15:14:22.216: INFO: Got endpoints: latency-svc-clc74 [757.152264ms] +Oct 27 15:14:22.256: INFO: Created: latency-svc-nwld2 +Oct 27 15:14:22.258: INFO: Got endpoints: latency-svc-nrv9n [748.199097ms] +Oct 27 15:14:22.307: INFO: Got endpoints: latency-svc-bbmrl [750.540699ms] +Oct 27 15:14:22.312: INFO: Created: latency-svc-gt6zk +Oct 27 15:14:22.354: INFO: Created: latency-svc-vhqzr +Oct 27 15:14:22.357: INFO: Got endpoints: latency-svc-n5xr7 [748.311081ms] +Oct 27 15:14:22.401: INFO: Created: latency-svc-bwlmf +Oct 27 15:14:22.410: INFO: Got endpoints: latency-svc-zwjdt [753.217448ms] +Oct 27 15:14:22.455: INFO: Created: latency-svc-r4z7z +Oct 27 15:14:22.459: INFO: Got endpoints: latency-svc-vpjs6 [751.040917ms] +Oct 27 15:14:22.508: INFO: Created: latency-svc-vr8x8 +Oct 27 15:14:22.508: INFO: Got endpoints: latency-svc-vjr5s [752.047319ms] +Oct 27 15:14:22.557: INFO: Got endpoints: latency-svc-8dkcv [750.286545ms] +Oct 27 15:14:22.607: INFO: Got endpoints: latency-svc-zf9j5 [750.471279ms] +Oct 27 15:14:22.651: INFO: Created: latency-svc-w2q8g +Oct 27 15:14:22.655: INFO: Created: latency-svc-kw7ht +Oct 27 15:14:22.660: INFO: Created: latency-svc-bh7t9 +Oct 27 15:14:22.661: INFO: Got endpoints: latency-svc-cwggw [751.657297ms] +Oct 27 15:14:22.702: INFO: Created: latency-svc-q9r79 +Oct 27 15:14:22.708: INFO: Got endpoints: latency-svc-cnbgc [750.498224ms] +Oct 27 15:14:22.756: INFO: Created: latency-svc-t5chd +Oct 27 15:14:22.757: INFO: Got endpoints: latency-svc-cv24d [748.309092ms] +Oct 27 15:14:22.803: INFO: Created: latency-svc-8f7ck +Oct 27 15:14:22.807: INFO: Got endpoints: latency-svc-ktlfh [748.099555ms] +Oct 27 15:14:22.853: INFO: Created: latency-svc-qwtqk +Oct 27 15:14:22.857: INFO: Got endpoints: latency-svc-9cldc [749.868415ms] +Oct 27 15:14:22.902: INFO: Created: latency-svc-2nj8k +Oct 27 15:14:22.907: INFO: Got endpoints: latency-svc-nwld2 [750.193477ms] +Oct 27 15:14:22.952: INFO: Created: latency-svc-l6tsm +Oct 27 15:14:22.956: INFO: Got endpoints: latency-svc-gt6zk [740.470038ms] +Oct 27 15:14:23.003: INFO: Created: latency-svc-pbr8z +Oct 27 15:14:23.007: INFO: Got endpoints: latency-svc-vhqzr [748.743009ms] +Oct 27 15:14:23.052: INFO: Created: 
latency-svc-nv8d8 +Oct 27 15:14:23.059: INFO: Got endpoints: latency-svc-bwlmf [752.511029ms] +Oct 27 15:14:23.102: INFO: Created: latency-svc-tqq47 +Oct 27 15:14:23.106: INFO: Got endpoints: latency-svc-r4z7z [749.651062ms] +Oct 27 15:14:23.155: INFO: Created: latency-svc-whqzk +Oct 27 15:14:23.158: INFO: Got endpoints: latency-svc-vr8x8 [748.179862ms] +Oct 27 15:14:23.203: INFO: Created: latency-svc-cwsh9 +Oct 27 15:14:23.206: INFO: Got endpoints: latency-svc-w2q8g [652.471346ms] +Oct 27 15:14:23.254: INFO: Created: latency-svc-gj6cv +Oct 27 15:14:23.259: INFO: Got endpoints: latency-svc-kw7ht [704.233821ms] +Oct 27 15:14:23.302: INFO: Created: latency-svc-xh5fp +Oct 27 15:14:23.306: INFO: Got endpoints: latency-svc-bh7t9 [749.050875ms] +Oct 27 15:14:23.356: INFO: Created: latency-svc-n4xfj +Oct 27 15:14:23.358: INFO: Got endpoints: latency-svc-q9r79 [751.604634ms] +Oct 27 15:14:23.402: INFO: Created: latency-svc-9tp96 +Oct 27 15:14:23.409: INFO: Got endpoints: latency-svc-t5chd [748.065453ms] +Oct 27 15:14:23.456: INFO: Created: latency-svc-j7c99 +Oct 27 15:14:23.457: INFO: Got endpoints: latency-svc-8f7ck [748.999913ms] +Oct 27 15:14:23.505: INFO: Created: latency-svc-thh72 +Oct 27 15:14:23.507: INFO: Got endpoints: latency-svc-qwtqk [750.233417ms] +Oct 27 15:14:23.552: INFO: Created: latency-svc-rfnjn +Oct 27 15:14:23.558: INFO: Got endpoints: latency-svc-2nj8k [750.931735ms] +Oct 27 15:14:23.603: INFO: Created: latency-svc-c555m +Oct 27 15:14:23.610: INFO: Got endpoints: latency-svc-l6tsm [752.704688ms] +Oct 27 15:14:23.653: INFO: Created: latency-svc-8f2q9 +Oct 27 15:14:23.660: INFO: Got endpoints: latency-svc-pbr8z [753.252738ms] +Oct 27 15:14:23.706: INFO: Created: latency-svc-lfj8w +Oct 27 15:14:23.709: INFO: Got endpoints: latency-svc-nv8d8 [752.125587ms] +Oct 27 15:14:23.755: INFO: Created: latency-svc-skskk +Oct 27 15:14:23.757: INFO: Got endpoints: latency-svc-tqq47 [749.754106ms] +Oct 27 15:14:23.804: INFO: Created: latency-svc-n2d68 +Oct 27 15:14:23.807: INFO: Got endpoints: latency-svc-whqzk [747.875343ms] +Oct 27 15:14:23.854: INFO: Created: latency-svc-q2mvc +Oct 27 15:14:23.856: INFO: Got endpoints: latency-svc-cwsh9 [750.105892ms] +Oct 27 15:14:23.903: INFO: Created: latency-svc-x76b2 +Oct 27 15:14:23.907: INFO: Got endpoints: latency-svc-gj6cv [748.416024ms] +Oct 27 15:14:23.952: INFO: Created: latency-svc-zplbm +Oct 27 15:14:23.958: INFO: Got endpoints: latency-svc-xh5fp [751.55484ms] +Oct 27 15:14:24.006: INFO: Created: latency-svc-hm4jp +Oct 27 15:14:24.006: INFO: Got endpoints: latency-svc-n4xfj [747.818976ms] +Oct 27 15:14:24.054: INFO: Created: latency-svc-hrg5p +Oct 27 15:14:24.056: INFO: Got endpoints: latency-svc-9tp96 [749.705781ms] +Oct 27 15:14:24.102: INFO: Created: latency-svc-8vx2t +Oct 27 15:14:24.108: INFO: Got endpoints: latency-svc-j7c99 [749.907137ms] +Oct 27 15:14:24.153: INFO: Created: latency-svc-b6s9d +Oct 27 15:14:24.157: INFO: Got endpoints: latency-svc-thh72 [748.536897ms] +Oct 27 15:14:24.204: INFO: Created: latency-svc-lcxxb +Oct 27 15:14:24.208: INFO: Got endpoints: latency-svc-rfnjn [751.139505ms] +Oct 27 15:14:24.254: INFO: Created: latency-svc-n4ls7 +Oct 27 15:14:24.260: INFO: Got endpoints: latency-svc-c555m [753.014985ms] +Oct 27 15:14:24.307: INFO: Created: latency-svc-wtmx4 +Oct 27 15:14:24.307: INFO: Got endpoints: latency-svc-8f2q9 [748.821889ms] +Oct 27 15:14:24.359: INFO: Created: latency-svc-vqq44 +Oct 27 15:14:24.359: INFO: Got endpoints: latency-svc-lfj8w [749.370579ms] +Oct 27 15:14:24.403: INFO: Created: latency-svc-mwb6k 
+Oct 27 15:14:24.407: INFO: Got endpoints: latency-svc-skskk [746.480693ms] +Oct 27 15:14:24.455: INFO: Created: latency-svc-gdgsf +Oct 27 15:14:24.457: INFO: Got endpoints: latency-svc-n2d68 [747.853869ms] +Oct 27 15:14:24.502: INFO: Created: latency-svc-js9zk +Oct 27 15:14:24.507: INFO: Got endpoints: latency-svc-q2mvc [750.085398ms] +Oct 27 15:14:24.602: INFO: Got endpoints: latency-svc-x76b2 [795.141814ms] +Oct 27 15:14:24.700: INFO: Got endpoints: latency-svc-zplbm [843.646371ms] +Oct 27 15:14:24.706: INFO: Created: latency-svc-2sdzt +Oct 27 15:14:24.710: INFO: Created: latency-svc-kqs44 +Oct 27 15:14:24.711: INFO: Got endpoints: latency-svc-hm4jp [803.512719ms] +Oct 27 15:14:24.805: INFO: Got endpoints: latency-svc-hrg5p [846.955277ms] +Oct 27 15:14:24.805: INFO: Got endpoints: latency-svc-8vx2t [798.592196ms] +Oct 27 15:14:24.807: INFO: Got endpoints: latency-svc-b6s9d [751.357383ms] +Oct 27 15:14:24.901: INFO: Created: latency-svc-k79ks +Oct 27 15:14:24.909: INFO: Created: latency-svc-ktsp4 +Oct 27 15:14:24.909: INFO: Got endpoints: latency-svc-lcxxb [800.908002ms] +Oct 27 15:14:24.909: INFO: Got endpoints: latency-svc-n4ls7 [751.975168ms] +Oct 27 15:14:24.914: INFO: Created: latency-svc-ln6s7 +Oct 27 15:14:24.919: INFO: Created: latency-svc-9fkxt +Oct 27 15:14:25.000: INFO: Created: latency-svc-dkj2g +Oct 27 15:14:25.004: INFO: Got endpoints: latency-svc-wtmx4 [796.268954ms] +Oct 27 15:14:25.004: INFO: Created: latency-svc-jjmsl +Oct 27 15:14:25.006: INFO: Got endpoints: latency-svc-vqq44 [745.978559ms] +Oct 27 15:14:25.014: INFO: Created: latency-svc-ktn6m +Oct 27 15:14:25.018: INFO: Created: latency-svc-bm776 +Oct 27 15:14:25.057: INFO: Got endpoints: latency-svc-mwb6k [750.103149ms] +Oct 27 15:14:25.100: INFO: Created: latency-svc-7g8qt +Oct 27 15:14:25.105: INFO: Created: latency-svc-p5fln +Oct 27 15:14:25.107: INFO: Got endpoints: latency-svc-gdgsf [748.400243ms] +Oct 27 15:14:25.154: INFO: Created: latency-svc-fcx4t +Oct 27 15:14:25.158: INFO: Got endpoints: latency-svc-js9zk [751.363355ms] +Oct 27 15:14:25.203: INFO: Created: latency-svc-hxmtl +Oct 27 15:14:25.207: INFO: Got endpoints: latency-svc-2sdzt [750.546273ms] +Oct 27 15:14:25.253: INFO: Created: latency-svc-zb5nw +Oct 27 15:14:25.257: INFO: Got endpoints: latency-svc-kqs44 [749.770935ms] +Oct 27 15:14:25.304: INFO: Created: latency-svc-xwb44 +Oct 27 15:14:25.306: INFO: Got endpoints: latency-svc-k79ks [606.311344ms] +Oct 27 15:14:25.355: INFO: Created: latency-svc-rwm4w +Oct 27 15:14:25.356: INFO: Got endpoints: latency-svc-ktsp4 [753.559468ms] +Oct 27 15:14:25.404: INFO: Created: latency-svc-8qxxc +Oct 27 15:14:25.407: INFO: Got endpoints: latency-svc-ln6s7 [599.283409ms] +Oct 27 15:14:25.453: INFO: Created: latency-svc-h2474 +Oct 27 15:14:25.458: INFO: Got endpoints: latency-svc-9fkxt [747.125945ms] +Oct 27 15:14:25.503: INFO: Created: latency-svc-s6mkb +Oct 27 15:14:25.507: INFO: Got endpoints: latency-svc-dkj2g [702.239797ms] +Oct 27 15:14:25.553: INFO: Created: latency-svc-j5xct +Oct 27 15:14:25.557: INFO: Got endpoints: latency-svc-jjmsl [751.499257ms] +Oct 27 15:14:25.607: INFO: Created: latency-svc-qmngh +Oct 27 15:14:25.608: INFO: Got endpoints: latency-svc-ktn6m [698.567332ms] +Oct 27 15:14:25.652: INFO: Created: latency-svc-q6gkb +Oct 27 15:14:25.656: INFO: Got endpoints: latency-svc-bm776 [747.17865ms] +Oct 27 15:14:25.703: INFO: Created: latency-svc-2f5rx +Oct 27 15:14:25.708: INFO: Got endpoints: latency-svc-7g8qt [703.785076ms] +Oct 27 15:14:25.751: INFO: Created: latency-svc-7fdt5 +Oct 27 
15:14:25.757: INFO: Got endpoints: latency-svc-p5fln [751.052515ms] +Oct 27 15:14:25.803: INFO: Created: latency-svc-wqpgj +Oct 27 15:14:25.807: INFO: Got endpoints: latency-svc-fcx4t [749.665451ms] +Oct 27 15:14:25.855: INFO: Created: latency-svc-g5glc +Oct 27 15:14:25.856: INFO: Got endpoints: latency-svc-hxmtl [748.499998ms] +Oct 27 15:14:25.903: INFO: Created: latency-svc-zplpx +Oct 27 15:14:25.907: INFO: Got endpoints: latency-svc-zb5nw [748.722018ms] +Oct 27 15:14:25.956: INFO: Created: latency-svc-9dzmz +Oct 27 15:14:25.957: INFO: Got endpoints: latency-svc-xwb44 [749.871429ms] +Oct 27 15:14:26.002: INFO: Created: latency-svc-lnhdl +Oct 27 15:14:26.008: INFO: Got endpoints: latency-svc-rwm4w [751.008472ms] +Oct 27 15:14:26.053: INFO: Created: latency-svc-bzbc7 +Oct 27 15:14:26.059: INFO: Got endpoints: latency-svc-8qxxc [752.76921ms] +Oct 27 15:14:26.107: INFO: Got endpoints: latency-svc-h2474 [750.943467ms] +Oct 27 15:14:26.114: INFO: Created: latency-svc-ht4sk +Oct 27 15:14:26.154: INFO: Created: latency-svc-4p487 +Oct 27 15:14:26.156: INFO: Got endpoints: latency-svc-s6mkb [749.166671ms] +Oct 27 15:14:26.202: INFO: Created: latency-svc-vrgz9 +Oct 27 15:14:26.207: INFO: Got endpoints: latency-svc-j5xct [749.36482ms] +Oct 27 15:14:26.251: INFO: Created: latency-svc-xkf54 +Oct 27 15:14:26.257: INFO: Got endpoints: latency-svc-qmngh [749.414778ms] +Oct 27 15:14:26.303: INFO: Created: latency-svc-nwj9c +Oct 27 15:14:26.306: INFO: Got endpoints: latency-svc-q6gkb [749.597174ms] +Oct 27 15:14:26.352: INFO: Created: latency-svc-lfc2w +Oct 27 15:14:26.357: INFO: Got endpoints: latency-svc-2f5rx [748.833372ms] +Oct 27 15:14:26.401: INFO: Created: latency-svc-xfzwf +Oct 27 15:14:26.409: INFO: Got endpoints: latency-svc-7fdt5 [752.16463ms] +Oct 27 15:14:26.453: INFO: Created: latency-svc-jnjwc +Oct 27 15:14:26.459: INFO: Got endpoints: latency-svc-wqpgj [750.737189ms] +Oct 27 15:14:26.504: INFO: Created: latency-svc-h4jmq +Oct 27 15:14:26.506: INFO: Got endpoints: latency-svc-g5glc [749.393326ms] +Oct 27 15:14:26.554: INFO: Created: latency-svc-tk89l +Oct 27 15:14:26.556: INFO: Got endpoints: latency-svc-zplpx [749.341321ms] +Oct 27 15:14:26.602: INFO: Created: latency-svc-zprdk +Oct 27 15:14:26.608: INFO: Got endpoints: latency-svc-9dzmz [751.827006ms] +Oct 27 15:14:26.653: INFO: Created: latency-svc-7vwd5 +Oct 27 15:14:26.657: INFO: Got endpoints: latency-svc-lnhdl [750.166332ms] +Oct 27 15:14:26.703: INFO: Created: latency-svc-vz9wn +Oct 27 15:14:26.708: INFO: Got endpoints: latency-svc-bzbc7 [750.449703ms] +Oct 27 15:14:26.752: INFO: Created: latency-svc-zw99q +Oct 27 15:14:26.756: INFO: Got endpoints: latency-svc-ht4sk [748.536816ms] +Oct 27 15:14:26.803: INFO: Created: latency-svc-wgfsq +Oct 27 15:14:26.807: INFO: Got endpoints: latency-svc-4p487 [747.739982ms] +Oct 27 15:14:26.853: INFO: Created: latency-svc-555n8 +Oct 27 15:14:26.856: INFO: Got endpoints: latency-svc-vrgz9 [749.519673ms] +Oct 27 15:14:26.902: INFO: Created: latency-svc-npsb4 +Oct 27 15:14:26.908: INFO: Got endpoints: latency-svc-xkf54 [752.315468ms] +Oct 27 15:14:26.951: INFO: Created: latency-svc-l8c7r +Oct 27 15:14:26.957: INFO: Got endpoints: latency-svc-nwj9c [749.352497ms] +Oct 27 15:14:27.006: INFO: Created: latency-svc-jtpwm +Oct 27 15:14:27.007: INFO: Got endpoints: latency-svc-lfc2w [750.544879ms] +Oct 27 15:14:27.054: INFO: Created: latency-svc-wzqxl +Oct 27 15:14:27.057: INFO: Got endpoints: latency-svc-xfzwf [750.375618ms] +Oct 27 15:14:27.104: INFO: Created: latency-svc-mdqzn +Oct 27 15:14:27.107: INFO: 
Got endpoints: latency-svc-jnjwc [749.640887ms] +Oct 27 15:14:27.152: INFO: Created: latency-svc-jnwv8 +Oct 27 15:14:27.203: INFO: Got endpoints: latency-svc-h4jmq [794.307355ms] +Oct 27 15:14:27.208: INFO: Created: latency-svc-kk4mk +Oct 27 15:14:27.208: INFO: Got endpoints: latency-svc-tk89l [748.846466ms] +Oct 27 15:14:27.257: INFO: Got endpoints: latency-svc-zprdk [750.48268ms] +Oct 27 15:14:27.298: INFO: Created: latency-svc-5db6t +Oct 27 15:14:27.309: INFO: Created: latency-svc-h7s78 +Oct 27 15:14:27.309: INFO: Got endpoints: latency-svc-7vwd5 [752.619624ms] +Oct 27 15:14:27.353: INFO: Created: latency-svc-6q9dw +Oct 27 15:14:27.358: INFO: Got endpoints: latency-svc-vz9wn [750.32602ms] +Oct 27 15:14:27.408: INFO: Created: latency-svc-sgcvj +Oct 27 15:14:27.408: INFO: Got endpoints: latency-svc-zw99q [750.835602ms] +Oct 27 15:14:27.453: INFO: Created: latency-svc-85vwc +Oct 27 15:14:27.457: INFO: Got endpoints: latency-svc-wgfsq [749.026552ms] +Oct 27 15:14:27.503: INFO: Created: latency-svc-lr7mk +Oct 27 15:14:27.507: INFO: Got endpoints: latency-svc-555n8 [750.64217ms] +Oct 27 15:14:27.552: INFO: Created: latency-svc-wmlst +Oct 27 15:14:27.558: INFO: Got endpoints: latency-svc-npsb4 [750.618688ms] +Oct 27 15:14:27.602: INFO: Created: latency-svc-kztnc +Oct 27 15:14:27.608: INFO: Got endpoints: latency-svc-l8c7r [751.088405ms] +Oct 27 15:14:27.653: INFO: Created: latency-svc-rdwnv +Oct 27 15:14:27.660: INFO: Got endpoints: latency-svc-jtpwm [751.248053ms] +Oct 27 15:14:27.702: INFO: Created: latency-svc-hkrvc +Oct 27 15:14:27.708: INFO: Got endpoints: latency-svc-wzqxl [751.321302ms] +Oct 27 15:14:27.756: INFO: Created: latency-svc-dfznm +Oct 27 15:14:27.756: INFO: Got endpoints: latency-svc-mdqzn [749.0016ms] +Oct 27 15:14:27.804: INFO: Created: latency-svc-zks9q +Oct 27 15:14:27.806: INFO: Got endpoints: latency-svc-jnwv8 [749.105236ms] +Oct 27 15:14:27.851: INFO: Created: latency-svc-4jlqh +Oct 27 15:14:27.857: INFO: Got endpoints: latency-svc-kk4mk [750.443332ms] +Oct 27 15:14:27.902: INFO: Created: latency-svc-b2rtj +Oct 27 15:14:27.907: INFO: Got endpoints: latency-svc-5db6t [703.658643ms] +Oct 27 15:14:27.953: INFO: Created: latency-svc-5q4qc +Oct 27 15:14:27.959: INFO: Got endpoints: latency-svc-h7s78 [750.830667ms] +Oct 27 15:14:28.006: INFO: Got endpoints: latency-svc-6q9dw [749.414353ms] +Oct 27 15:14:28.058: INFO: Got endpoints: latency-svc-sgcvj [748.625795ms] +Oct 27 15:14:28.109: INFO: Got endpoints: latency-svc-85vwc [750.551839ms] +Oct 27 15:14:28.157: INFO: Got endpoints: latency-svc-lr7mk [749.392116ms] +Oct 27 15:14:28.208: INFO: Got endpoints: latency-svc-wmlst [751.284457ms] +Oct 27 15:14:28.257: INFO: Got endpoints: latency-svc-kztnc [749.571357ms] +Oct 27 15:14:28.306: INFO: Got endpoints: latency-svc-rdwnv [748.581591ms] +Oct 27 15:14:28.357: INFO: Got endpoints: latency-svc-hkrvc [749.54948ms] +Oct 27 15:14:28.407: INFO: Got endpoints: latency-svc-dfznm [747.500081ms] +Oct 27 15:14:28.459: INFO: Got endpoints: latency-svc-zks9q [750.923297ms] +Oct 27 15:14:28.509: INFO: Got endpoints: latency-svc-4jlqh [752.304565ms] +Oct 27 15:14:28.564: INFO: Got endpoints: latency-svc-b2rtj [758.35401ms] +Oct 27 15:14:28.607: INFO: Got endpoints: latency-svc-5q4qc [749.722589ms] +Oct 27 15:14:28.607: INFO: Latencies: [98.978227ms 100.178944ms 106.064787ms 107.839158ms 109.379648ms 112.673469ms 112.774997ms 113.499902ms 114.473563ms 120.248343ms 124.866833ms 128.752977ms 131.225396ms 138.011578ms 140.007531ms 148.268925ms 180.365141ms 192.119122ms 196.530677ms 196.96807ms 
197.670037ms 198.085973ms 198.411284ms 253.644281ms 255.248601ms 258.233467ms 261.529563ms 262.141078ms 262.677819ms 295.337846ms 341.543962ms 350.966015ms 365.818465ms 366.246235ms 366.267857ms 367.920029ms 374.711715ms 379.059277ms 385.017046ms 431.805591ms 434.690345ms 435.30911ms 438.939744ms 439.319346ms 447.668977ms 473.890848ms 519.191242ms 564.571592ms 599.283409ms 606.311344ms 606.516033ms 607.839705ms 631.42799ms 652.471346ms 680.816286ms 698.567332ms 702.239797ms 703.658643ms 703.785076ms 704.233821ms 705.551272ms 728.992966ms 740.470038ms 743.456585ms 745.978559ms 746.480693ms 747.125945ms 747.17865ms 747.500081ms 747.739982ms 747.818976ms 747.853869ms 747.875343ms 748.065453ms 748.099555ms 748.107708ms 748.179862ms 748.199097ms 748.309092ms 748.311081ms 748.32643ms 748.400243ms 748.416024ms 748.499998ms 748.536816ms 748.536897ms 748.581591ms 748.625795ms 748.722018ms 748.743009ms 748.821889ms 748.832707ms 748.833372ms 748.846466ms 748.994436ms 748.999913ms 749.0016ms 749.026552ms 749.050875ms 749.105236ms 749.166671ms 749.341321ms 749.352497ms 749.36482ms 749.370579ms 749.392116ms 749.393326ms 749.414353ms 749.414778ms 749.444923ms 749.519673ms 749.54948ms 749.571357ms 749.597174ms 749.640887ms 749.651062ms 749.665451ms 749.693257ms 749.705781ms 749.722589ms 749.754106ms 749.770935ms 749.832444ms 749.868415ms 749.871429ms 749.907137ms 750.085398ms 750.103149ms 750.105892ms 750.166332ms 750.166917ms 750.183491ms 750.193477ms 750.233417ms 750.25918ms 750.286545ms 750.32602ms 750.375618ms 750.411843ms 750.443332ms 750.449703ms 750.471279ms 750.48268ms 750.498224ms 750.529803ms 750.540699ms 750.544879ms 750.546273ms 750.551839ms 750.618688ms 750.64217ms 750.694882ms 750.737189ms 750.830667ms 750.835602ms 750.923297ms 750.931735ms 750.943467ms 751.008472ms 751.040917ms 751.052515ms 751.088405ms 751.139505ms 751.248053ms 751.284457ms 751.321302ms 751.357383ms 751.363355ms 751.499257ms 751.55484ms 751.604634ms 751.634366ms 751.657297ms 751.827006ms 751.975168ms 752.047319ms 752.125587ms 752.16463ms 752.203941ms 752.304565ms 752.315468ms 752.388047ms 752.511029ms 752.619624ms 752.704688ms 752.76921ms 753.014985ms 753.217448ms 753.252738ms 753.559468ms 757.152264ms 758.35401ms 794.307355ms 795.141814ms 796.268954ms 798.592196ms 800.908002ms 803.512719ms 843.646371ms 846.955277ms] +Oct 27 15:14:28.607: INFO: 50 %ile: 749.166671ms +Oct 27 15:14:28.607: INFO: 90 %ile: 752.315468ms +Oct 27 15:14:28.607: INFO: 99 %ile: 843.646371ms +Oct 27 15:14:28.607: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:28.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-3282" for this suite. 
+•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":259,"skipped":4300,"failed":0} + +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:28.790: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1084 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-1084 +STEP: creating service affinity-nodeport-transition in namespace services-1084 +STEP: creating replication controller affinity-nodeport-transition in namespace services-1084 +I1027 15:14:29.710507 5725 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1084, replica count: 3 +I1027 15:14:32.811932 5725 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:14:33.173: INFO: Creating new exec pod +Oct 27 15:14:36.630: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1084 exec execpod-affinityrv57p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Oct 27 15:14:37.684: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Oct 27 15:14:37.684: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:14:37.685: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1084 exec execpod-affinityrv57p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.0.203 80' +Oct 27 15:14:38.740: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.0.203 80\nConnection to 100.67.0.203 80 port [tcp/http] succeeded!\n" +Oct 27 15:14:38.740: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:14:38.740: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1084 exec execpod-affinityrv57p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.28.25 30176' +Oct 27 15:14:39.848: INFO: 
stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.28.25 30176\nConnection to 10.250.28.25 30176 port [tcp/*] succeeded!\n" +Oct 27 15:14:39.848: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:14:39.848: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1084 exec execpod-affinityrv57p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.9.48 30176' +Oct 27 15:14:40.906: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.9.48 30176\nConnection to 10.250.9.48 30176 port [tcp/*] succeeded!\n" +Oct 27 15:14:40.906: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:14:41.088: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1084 exec execpod-affinityrv57p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.28.25:30176/ ; done' +Oct 27 15:14:42.187: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n" +Oct 27 15:14:42.187: INFO: stdout: "\naffinity-nodeport-transition-5kvh6\naffinity-nodeport-transition-tpdz9\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-5kvh6\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-5kvh6\naffinity-nodeport-transition-tpdz9\naffinity-nodeport-transition-tpdz9\naffinity-nodeport-transition-tpdz9\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-tpdz9\naffinity-nodeport-transition-5kvh6\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz" +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-5kvh6 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-tpdz9 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-5kvh6 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:42.187: INFO: Received response from 
host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-5kvh6 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-tpdz9 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-tpdz9 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-tpdz9 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-tpdz9 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-5kvh6 +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:42.187: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:42.371: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1084 exec execpod-affinityrv57p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.28.25:30176/ ; done' +Oct 27 15:14:43.515: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30176/\n" +Oct 27 15:14:43.515: INFO: stdout: "\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz\naffinity-nodeport-transition-qtvdz" +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: 
affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Received response from host: affinity-nodeport-transition-qtvdz +Oct 27 15:14:43.515: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1084, will wait for the garbage collector to delete the pods +Oct 27 15:14:43.891: INFO: Deleting ReplicationController affinity-nodeport-transition took: 91.146328ms +Oct 27 15:14:47.291: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 3.400418176s +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:49.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1084" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":260,"skipped":4300,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:49.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-257 +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:14:50.708: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:53.700: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready +STEP: Destroying namespace "custom-resource-definition-257" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":261,"skipped":4304,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:54.001: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-3177 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-3177 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 15:14:54.733: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 15:14:55.301: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:14:57.392: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:14:59.393: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:01.393: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:03.392: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:05.393: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:07.393: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:09.393: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:11.393: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:13.392: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:15:15.392: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 15:15:15.573: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 15:15:18.029: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 15:15:18.029: INFO: Breadth first check of 100.96.1.67 on host 10.250.28.25... +Oct 27 15:15:18.120: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.68:9080/dial?request=hostname&protocol=udp&host=100.96.1.67&port=8081&tries=1'] Namespace:pod-network-test-3177 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:15:18.120: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:15:18.813: INFO: Waiting for responses: map[] +Oct 27 15:15:18.813: INFO: reached 100.96.1.67 after 0/1 tries +Oct 27 15:15:18.813: INFO: Breadth first check of 100.96.0.77 on host 10.250.9.48... 
+Oct 27 15:15:18.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.68:9080/dial?request=hostname&protocol=udp&host=100.96.0.77&port=8081&tries=1'] Namespace:pod-network-test-3177 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:15:18.903: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:15:19.571: INFO: Waiting for responses: map[] +Oct 27 15:15:19.571: INFO: reached 100.96.0.77 after 0/1 tries +Oct 27 15:15:19.571: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:19.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-3177" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":262,"skipped":4334,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:19.844: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-8081 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Oct 27 15:15:21.118: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8081 4fd0fe6f-c050-4d99-aa44-84b56c8d8f84 36145 0 2021-10-27 15:15:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 15:15:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:15:21.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8081 4fd0fe6f-c050-4d99-aa44-84b56c8d8f84 36146 0 2021-10-27 15:15:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 15:15:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:15:21.118: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8081 4fd0fe6f-c050-4d99-aa44-84b56c8d8f84 36147 0 2021-10-27 15:15:20 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 15:15:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Oct 27 15:15:31.753: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8081 4fd0fe6f-c050-4d99-aa44-84b56c8d8f84 36240 0 2021-10-27 15:15:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 15:15:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:15:31.753: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8081 4fd0fe6f-c050-4d99-aa44-84b56c8d8f84 36241 0 2021-10-27 15:15:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 15:15:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:15:31.753: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8081 4fd0fe6f-c050-4d99-aa44-84b56c8d8f84 36243 0 2021-10-27 15:15:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 15:15:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:31.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8081" for this suite. 
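
For reference, the watch semantics exercised above can be reproduced by hand with kubectl; a minimal sketch, assuming a throwaway namespace `watch-demo` and illustrative object names (none of these are taken from the recorded run):

```bash
kubectl create namespace watch-demo

# Watch only configmaps carrying the label, printing the event type
# (ADDED / MODIFIED / DELETED) for each notification.
kubectl -n watch-demo get configmaps \
  -l watch-this-configmap=label-changed-and-restored \
  --watch --output-watch-events &

# The configmap is created unlabeled, so no event fires yet; labeling it
# so it matches the selector produces ADDED.
kubectl -n watch-demo create configmap e2e-watch-demo
kubectl -n watch-demo label configmap e2e-watch-demo \
  watch-this-configmap=label-changed-and-restored

# Changing the label so the object stops matching produces DELETED,
# even though the configmap still exists in the cluster.
kubectl -n watch-demo label configmap e2e-watch-demo \
  watch-this-configmap=no-longer-matching --overwrite

kill %1
```
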
+•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":263,"skipped":4362,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:32.024: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-5693 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 15:15:32.943: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:15:35.033: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 15:15:35.308: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:15:37.399: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 15:15:37.716: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 15:15:37.807: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 27 15:15:39.807: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 15:15:39.898: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 27 15:15:41.807: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 15:15:41.898: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:41.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-5693" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":264,"skipped":4405,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:42.247: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-3072 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nspatchtest-807d5f31-7ad8-4436-a9df-11b704a4850a-3647 +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:43.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-3072" for this suite. +STEP: Destroying namespace "nspatchtest-807d5f31-7ad8-4436-a9df-11b704a4850a-3647" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":265,"skipped":4430,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:43.995: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-5617 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:47.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-5617" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":266,"skipped":4442,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:47.413: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9840 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-080cf2d8-5480-4b04-a04f-a77fb0bf7213 +STEP: Creating a pod to test consume configMaps +Oct 27 15:15:48.378: INFO: Waiting up to 5m0s for pod "pod-configmaps-14dcaa62-928a-457a-9b14-6b0c9e9e013b" in namespace "configmap-9840" to be "Succeeded or Failed" +Oct 27 15:15:48.468: INFO: Pod "pod-configmaps-14dcaa62-928a-457a-9b14-6b0c9e9e013b": Phase="Pending", Reason="", readiness=false. Elapsed: 90.358998ms +Oct 27 15:15:50.560: INFO: Pod "pod-configmaps-14dcaa62-928a-457a-9b14-6b0c9e9e013b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182191285s +STEP: Saw pod success +Oct 27 15:15:50.560: INFO: Pod "pod-configmaps-14dcaa62-928a-457a-9b14-6b0c9e9e013b" satisfied condition "Succeeded or Failed" +Oct 27 15:15:50.651: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-14dcaa62-928a-457a-9b14-6b0c9e9e013b container configmap-volume-test: +STEP: delete the pod +Oct 27 15:15:50.842: INFO: Waiting for pod pod-configmaps-14dcaa62-928a-457a-9b14-6b0c9e9e013b to disappear +Oct 27 15:15:50.932: INFO: Pod pod-configmaps-14dcaa62-928a-457a-9b14-6b0c9e9e013b no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:50.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9840" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":267,"skipped":4461,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:51.211: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5396 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-429 +STEP: Creating secret with name secret-test-14bbaa21-8050-4b23-9e53-f5879d1a8d3f +STEP: Creating a pod to test consume secrets +Oct 27 15:15:52.772: INFO: Waiting up to 5m0s for pod "pod-secrets-4a0ccfd5-4f85-41a2-a5ac-de305bb5b2f5" in namespace "secrets-5396" to be "Succeeded or Failed" +Oct 27 15:15:52.862: INFO: Pod "pod-secrets-4a0ccfd5-4f85-41a2-a5ac-de305bb5b2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 90.257424ms +Oct 27 15:15:54.953: INFO: Pod "pod-secrets-4a0ccfd5-4f85-41a2-a5ac-de305bb5b2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181148232s +STEP: Saw pod success +Oct 27 15:15:54.953: INFO: Pod "pod-secrets-4a0ccfd5-4f85-41a2-a5ac-de305bb5b2f5" satisfied condition "Succeeded or Failed" +Oct 27 15:15:55.043: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-4a0ccfd5-4f85-41a2-a5ac-de305bb5b2f5 container secret-volume-test: +STEP: delete the pod +Oct 27 15:15:55.275: INFO: Waiting for pod pod-secrets-4a0ccfd5-4f85-41a2-a5ac-de305bb5b2f5 to disappear +Oct 27 15:15:55.366: INFO: Pod pod-secrets-4a0ccfd5-4f85-41a2-a5ac-de305bb5b2f5 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:55.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5396" for this suite. +STEP: Destroying namespace "secret-namespace-429" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":268,"skipped":4510,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:55.728: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-4165 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Oct 27 15:15:56.643: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:56.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-4165" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":269,"skipped":4574,"failed":0} +SSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:57.099: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2686 +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Oct 27 15:16:00.112: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2686 PodName:pod-sharedvolume-d43ef39c-e8d0-4a7f-9e90-a4f55997f153 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:16:00.112: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:16:00.810: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:00.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2686" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":270,"skipped":4577,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:01.081: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5752 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-a386fc71-2232-42c8-839a-3d89128001d9 +STEP: Creating a pod to test consume configMaps +Oct 27 15:16:02.002: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc" in namespace "projected-5752" to be "Succeeded or Failed" +Oct 27 15:16:02.093: INFO: Pod "pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 90.404801ms +Oct 27 15:16:04.184: INFO: Pod "pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181469762s +Oct 27 15:16:06.275: INFO: Pod "pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.272977014s +STEP: Saw pod success +Oct 27 15:16:06.275: INFO: Pod "pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc" satisfied condition "Succeeded or Failed" +Oct 27 15:16:06.366: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc container projected-configmap-volume-test: +STEP: delete the pod +Oct 27 15:16:06.594: INFO: Waiting for pod pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc to disappear +Oct 27 15:16:06.685: INFO: Pod pod-projected-configmaps-afe9bea8-f54b-4013-a9eb-cf5caa0c9ccc no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:06.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5752" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":271,"skipped":4590,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:06.955: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6373 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 15:16:07.785: INFO: Waiting up to 5m0s for pod "pod-1fb5fb71-0571-4d76-bdb1-70b0ffb96cb3" in namespace "emptydir-6373" to be "Succeeded or Failed" +Oct 27 15:16:07.875: INFO: Pod "pod-1fb5fb71-0571-4d76-bdb1-70b0ffb96cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 90.39913ms +Oct 27 15:16:09.966: INFO: Pod "pod-1fb5fb71-0571-4d76-bdb1-70b0ffb96cb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181304304s +STEP: Saw pod success +Oct 27 15:16:09.966: INFO: Pod "pod-1fb5fb71-0571-4d76-bdb1-70b0ffb96cb3" satisfied condition "Succeeded or Failed" +Oct 27 15:16:10.057: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-1fb5fb71-0571-4d76-bdb1-70b0ffb96cb3 container test-container: +STEP: delete the pod +Oct 27 15:16:10.292: INFO: Waiting for pod pod-1fb5fb71-0571-4d76-bdb1-70b0ffb96cb3 to disappear +Oct 27 15:16:10.382: INFO: Pod pod-1fb5fb71-0571-4d76-bdb1-70b0ffb96cb3 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:10.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6373" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":272,"skipped":4604,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:10.653: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1264 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on tmpfs +Oct 27 15:16:11.485: INFO: Waiting up to 5m0s for pod "pod-33eddd6c-8457-4b79-8e1a-e08cf3b2f2ac" in namespace "emptydir-1264" to be "Succeeded or Failed" +Oct 27 15:16:11.576: INFO: Pod "pod-33eddd6c-8457-4b79-8e1a-e08cf3b2f2ac": Phase="Pending", Reason="", readiness=false. Elapsed: 90.398327ms +Oct 27 15:16:13.667: INFO: Pod "pod-33eddd6c-8457-4b79-8e1a-e08cf3b2f2ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181788284s +STEP: Saw pod success +Oct 27 15:16:13.667: INFO: Pod "pod-33eddd6c-8457-4b79-8e1a-e08cf3b2f2ac" satisfied condition "Succeeded or Failed" +Oct 27 15:16:13.758: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-33eddd6c-8457-4b79-8e1a-e08cf3b2f2ac container test-container: +STEP: delete the pod +Oct 27 15:16:13.948: INFO: Waiting for pod pod-33eddd6c-8457-4b79-8e1a-e08cf3b2f2ac to disappear +Oct 27 15:16:14.038: INFO: Pod pod-33eddd6c-8457-4b79-8e1a-e08cf3b2f2ac no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:14.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1264" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":273,"skipped":4620,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:14.309: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-273 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:43.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-273" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":274,"skipped":4669,"failed":0} + +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:44.048: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2476 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 in namespace container-probe-2476 +Oct 27 15:16:47.063: INFO: Started pod liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 in namespace container-probe-2476 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:16:47.153: INFO: Initial restart count of pod liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 is 0 +Oct 27 15:17:08.159: INFO: Restart count of pod container-probe-2476/liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 is now 1 (21.006027926s elapsed) +Oct 27 15:17:26.982: INFO: Restart count of pod container-probe-2476/liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 is now 2 (39.828566502s elapsed) +Oct 27 15:17:45.803: INFO: 
Restart count of pod container-probe-2476/liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 is now 3 (58.649902167s elapsed) +Oct 27 15:18:06.721: INFO: Restart count of pod container-probe-2476/liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 is now 4 (1m19.568014284s elapsed) +Oct 27 15:19:07.369: INFO: Restart count of pod container-probe-2476/liveness-ca3e089d-f3ab-4b45-9ad5-b912a7e51da8 is now 5 (2m20.215421545s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:19:07.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2476" for this suite. +•{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":275,"skipped":4669,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:19:07.735: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-4675 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-4675, will wait for the garbage collector to delete the pods +Oct 27 15:19:12.932: INFO: Deleting Job.batch foo took: 91.039897ms +Oct 27 15:19:13.034: INFO: Terminating Job.batch foo pods took: 101.147489ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:19:44.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-4675" for this suite. 
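
The deletion flow above, where the garbage collector removes the job's pods before the job itself, corresponds to foreground cascading deletion; a minimal sketch with illustrative names:

```bash
# Create a job and delete it; with --cascade=foreground the garbage
# collector deletes the job's pods before the Job object itself.
kubectl create job delete-demo --image=busybox:1.34 -- sleep 3600
kubectl delete job delete-demo --cascade=foreground

# The pods created by the job are gone as well (job-name is the label
# the job controller sets on its pods).
kubectl get pods -l job-name=delete-demo
```
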
+•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":276,"skipped":4765,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:19:44.907: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5905 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-5905 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-5905 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5905 +Oct 27 15:19:46.010: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 15:19:56.104: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Oct 27 15:19:56.194: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:19:57.285: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:19:57.285: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:19:57.285: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:19:57.377: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 15:20:07.469: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:20:07.469: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:20:07.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999658s +Oct 27 15:20:08.923: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.909201833s +Oct 27 15:20:10.014: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.818125176s +Oct 27 15:20:11.105: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.727165753s +Oct 27 15:20:12.196: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 5.635991721s +Oct 27 15:20:13.288: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.54420119s +Oct 27 15:20:14.379: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.452981162s +Oct 27 15:20:15.470: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.362243793s +Oct 27 15:20:16.562: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.270458009s +Oct 27 15:20:17.653: INFO: Verifying statefulset ss doesn't scale past 1 for another 178.757072ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5905 +Oct 27 15:20:18.744: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:20:19.760: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:20:19.760: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:20:19.760: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:20:19.851: INFO: Found 1 stateful pods, waiting for 3 +Oct 27 15:20:29.944: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:20:29.944: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:20:29.944: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Oct 27 15:20:30.125: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:20:31.166: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:20:31.166: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:20:31.166: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:20:31.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:20:32.241: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:20:32.241: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:20:32.241: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:20:32.242: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 
15:20:33.295: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:20:33.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:20:33.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:20:33.295: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:20:33.477: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:20:33.477: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:20:33.477: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:20:33.750: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999671s +Oct 27 15:20:34.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.907997087s +Oct 27 15:20:35.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.816568732s +Oct 27 15:20:37.025: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.725181116s +Oct 27 15:20:38.116: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.634391738s +Oct 27 15:20:39.208: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.543301811s +Oct 27 15:20:40.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.451160022s +Oct 27 15:20:41.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.359994368s +Oct 27 15:20:42.482: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.268782118s +Oct 27 15:20:43.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 176.855893ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5905 +Oct 27 15:20:44.665: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:20:45.726: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:20:45.726: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:20:45.726: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:20:45.726: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:20:46.731: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:20:46.731: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:20:46.731: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:20:46.731: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true' +Oct 27 15:20:47.359: INFO: rc: 1 +Oct 27 15:20:47.360: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +error: unable to upgrade connection: container not found ("webserver") + +error: +exit status 1 +Oct 27 15:20:57.364: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:20:57.776: INFO: rc: 1 +Oct 27 15:20:57.777: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:21:07.777: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:21:08.204: INFO: rc: 1 +Oct 27 15:21:08.204: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:21:18.205: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:21:18.619: INFO: rc: 1 +Oct 27 15:21:18.619: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:21:28.620: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:21:29.037: INFO: rc: 1 +Oct 27 15:21:29.037: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:21:39.037: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:21:39.456: INFO: rc: 1 +Oct 27 15:21:39.456: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:21:49.457: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:21:49.897: INFO: rc: 1 +Oct 27 15:21:49.897: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:21:59.898: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:22:00.319: INFO: rc: 1 +Oct 27 15:22:00.319: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:22:10.320: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:22:10.800: INFO: rc: 1 +Oct 27 15:22:10.801: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command 
stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:22:20.801: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:22:21.406: INFO: rc: 1 +Oct 27 15:22:21.406: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:22:31.409: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:22:31.824: INFO: rc: 1 +Oct 27 15:22:31.824: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:22:41.824: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:22:42.245: INFO: rc: 1 +Oct 27 15:22:42.245: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:22:52.248: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:22:52.698: INFO: rc: 1 +Oct 27 15:22:52.698: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:23:02.699: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:23:03.114: INFO: rc: 1 +Oct 27 15:23:03.114: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:23:13.114: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:23:13.531: INFO: rc: 1 +Oct 27 15:23:13.531: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:23:23.531: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:23:23.951: INFO: rc: 1 +Oct 27 15:23:23.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:23:33.951: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:23:34.379: INFO: rc: 1 +Oct 27 15:23:34.379: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:23:44.380: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 
15:23:45.185: INFO: rc: 1 +Oct 27 15:23:45.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:23:55.186: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:23:55.605: INFO: rc: 1 +Oct 27 15:23:55.605: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:24:05.607: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:06.028: INFO: rc: 1 +Oct 27 15:24:06.028: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:24:16.031: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:16.442: INFO: rc: 1 +Oct 27 15:24:16.442: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:24:26.445: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:26.860: INFO: rc: 1 +Oct 27 15:24:26.860: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:24:36.864: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:37.290: INFO: rc: 1 +Oct 27 15:24:37.290: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:24:47.294: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:47.707: INFO: rc: 1 +Oct 27 15:24:47.708: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:24:57.708: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:58.126: INFO: rc: 1 +Oct 27 15:24:58.126: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:25:08.127: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:25:08.551: INFO: rc: 1 +Oct 27 15:25:08.551: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not 
found + +error: +exit status 1 +Oct 27 15:25:18.552: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:25:18.990: INFO: rc: 1 +Oct 27 15:25:18.990: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:25:28.990: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:25:29.422: INFO: rc: 1 +Oct 27 15:25:29.422: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:25:39.422: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:25:39.835: INFO: rc: 1 +Oct 27 15:25:39.835: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:25:49.835: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5905 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:25:50.250: INFO: rc: 1 +Oct 27 15:25:50.250: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: +Oct 27 15:25:50.250: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:25:50.522: INFO: Deleting all statefulset in ns statefulset-5905 +Oct 27 15:25:50.612: INFO: Scaling statefulset ss to 0 +Oct 27 15:25:50.884: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:25:50.975: INFO: Deleting statefulset ss 
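
For context: the long run of `pods "ss-2" not found` retries above is the suite's RunHostCmd helper polling a pod that has already been removed during the scale-down this spec exercises; the suite tolerates the failures and, as the summary below shows, the spec still passes. A minimal client-go sketch of the same scale-down follows — illustrative only, not part of the e2e suite; the kubeconfig path, namespace, and StatefulSet name are copied from the log for flavor.

```go
// Illustrative only — not part of the e2e suite. Scales StatefulSet "ss"
// to 0 replicas; the controller then deletes ss-2, ss-1, ss-0 in reverse
// ordinal order, the behavior the spec above verifies.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sts := cs.AppsV1().StatefulSets("statefulset-5905")

	scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 0
	if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled statefulset ss to 0; pods terminate highest ordinal first")
}
```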
+[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:51.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5905" for this suite. + +• [SLOW TEST:366.611 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":277,"skipped":4784,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:51.519: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9491 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:25:53.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945153, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945153, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945153, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945153, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:25:56.784: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook 
should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:57.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9491" for this suite. +STEP: Destroying namespace "webhook-9491-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":278,"skipped":4798,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:58.713: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7270 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-2025b0f2-21bc-414e-9fff-def2a5caae8e +STEP: Creating a pod to test consume configMaps +Oct 27 15:25:59.631: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9605aafe-2ddf-46f0-aca0-2093574e1da5" in namespace "projected-7270" to be "Succeeded or Failed" +Oct 27 15:25:59.722: INFO: Pod "pod-projected-configmaps-9605aafe-2ddf-46f0-aca0-2093574e1da5": Phase="Pending", Reason="", readiness=false. Elapsed: 90.461661ms +Oct 27 15:26:01.813: INFO: Pod "pod-projected-configmaps-9605aafe-2ddf-46f0-aca0-2093574e1da5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181656671s +STEP: Saw pod success +Oct 27 15:26:01.813: INFO: Pod "pod-projected-configmaps-9605aafe-2ddf-46f0-aca0-2093574e1da5" satisfied condition "Succeeded or Failed" +Oct 27 15:26:01.903: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-configmaps-9605aafe-2ddf-46f0-aca0-2093574e1da5 container agnhost-container: +STEP: delete the pod +Oct 27 15:26:02.135: INFO: Waiting for pod pod-projected-configmaps-9605aafe-2ddf-46f0-aca0-2093574e1da5 to disappear +Oct 27 15:26:02.225: INFO: Pod pod-projected-configmaps-9605aafe-2ddf-46f0-aca0-2093574e1da5 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:02.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7270" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":279,"skipped":4848,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:02.496: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3290 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:26:03.441: INFO: The status of Pod busybox-readonly-fs24429ab7-42f9-4e74-bed4-fd5ffab4b296 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:05.532: INFO: The status of Pod busybox-readonly-fs24429ab7-42f9-4e74-bed4-fd5ffab4b296 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:05.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3290" for this suite. 
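
The Kubelet spec just above runs a busybox container with a read-only root filesystem and asserts that writes to `/` fail. A minimal sketch of the one field that drives this behavior, assuming a generic busybox image tag (the tag is not read from the log):

```go
// Illustrative pod object with ReadOnlyRootFilesystem set — the field the
// "should not write to root filesystem" spec above exercises. Any write to
// the container's root filesystem fails with a read-only error.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"}, // mirrors the log's pod name prefix
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // assumption: any busybox tag works for the demo
				Command: []string{"sh", "-c", "touch /should-fail || sleep 3600"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	fmt.Printf("pod %s read-only rootfs: %v\n",
		pod.Name, *pod.Spec.Containers[0].SecurityContext.ReadOnlyRootFilesystem)
}
```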
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":280,"skipped":4861,"failed":0} +SSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:06.031: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2071 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... +Oct 27 15:26:06.858: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-2071 e831fa2f-e2fc-483c-a6c1-2006a6ab45eb 39755 0 2021-10-27 15:26:06 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-27 15:26:06 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9mbjg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mbjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadCons
traints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:26:06.948: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:09.039: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Oct 27 15:26:09.039: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2071 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:26:09.039: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Verifying customized DNS server is configured on pod... +Oct 27 15:26:09.768: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2071 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:26:09.768: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:26:10.490: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2071" for this suite. +•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":281,"skipped":4865,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:10.854: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3984 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-projected-all-test-volume-5f560369-65bf-4716-ae54-422a9a8f3647 +STEP: Creating secret with name secret-projected-all-test-volume-a7411e07-ccf7-44cc-927d-0858f5df62c7 +STEP: Creating a pod to test Check all projections for projected volume plugin +Oct 27 15:26:11.864: INFO: Waiting up to 5m0s for pod "projected-volume-9773c0a2-80ff-4423-902b-025e70c30f85" in namespace "projected-3984" to be "Succeeded or Failed" +Oct 27 15:26:11.954: INFO: Pod "projected-volume-9773c0a2-80ff-4423-902b-025e70c30f85": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.384012ms +Oct 27 15:26:14.046: INFO: Pod "projected-volume-9773c0a2-80ff-4423-902b-025e70c30f85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181847941s +STEP: Saw pod success +Oct 27 15:26:14.046: INFO: Pod "projected-volume-9773c0a2-80ff-4423-902b-025e70c30f85" satisfied condition "Succeeded or Failed" +Oct 27 15:26:14.136: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod projected-volume-9773c0a2-80ff-4423-902b-025e70c30f85 container projected-all-volume-test: +STEP: delete the pod +Oct 27 15:26:14.337: INFO: Waiting for pod projected-volume-9773c0a2-80ff-4423-902b-025e70c30f85 to disappear +Oct 27 15:26:14.426: INFO: Pod projected-volume-9773c0a2-80ff-4423-902b-025e70c30f85 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:14.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3984" for this suite. +•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":282,"skipped":4879,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:14.697: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1422 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-1422 +STEP: creating service affinity-clusterip in namespace services-1422 +STEP: creating replication controller affinity-clusterip in namespace services-1422 +I1027 15:26:15.615015 5725 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1422, replica count: 3 +I1027 15:26:18.717030 5725 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:26:18.896: INFO: Creating new exec pod +Oct 27 15:26:22.172: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1422 exec execpod-affinitylg28j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Oct 27 15:26:23.253: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Oct 27 15:26:23.253: INFO: stdout: "HTTP/1.1 400 Bad 
Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:26:23.253: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1422 exec execpod-affinitylg28j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.34.214 80' +Oct 27 15:26:24.308: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.34.214 80\nConnection to 100.67.34.214 80 port [tcp/http] succeeded!\n" +Oct 27 15:26:24.308: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:26:24.308: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1422 exec execpod-affinitylg28j -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.67.34.214:80/ ; done' +Oct 27 15:26:25.403: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.34.214:80/\n" +Oct 27 15:26:25.404: INFO: stdout: "\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk\naffinity-clusterip-drlnk" +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: 
affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Received response from host: affinity-clusterip-drlnk +Oct 27 15:26:25.404: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-1422, will wait for the garbage collector to delete the pods +Oct 27 15:26:25.781: INFO: Deleting ReplicationController affinity-clusterip took: 91.768462ms +Oct 27 15:26:25.881: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.715417ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:28.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1422" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":283,"skipped":4888,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:28.661: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6622 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-df620697-1489-4e10-bff4-1a12ebaace48 +STEP: Creating the pod +Oct 27 15:26:29.761: INFO: The status of Pod pod-projected-configmaps-05b2644c-d33e-4e93-a12a-4091d907d15d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:31.853: INFO: The status of Pod pod-projected-configmaps-05b2644c-d33e-4e93-a12a-4091d907d15d is Running (Ready = true) +STEP: Updating configmap projected-configmap-test-upd-df620697-1489-4e10-bff4-1a12ebaace48 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:34.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6622" for this suite. 
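
The projected-ConfigMap spec just above mounts a ConfigMap through a projected volume, updates it, and waits for the kubelet's sync loop to rewrite the mounted files. A sketch of the update half — illustrative only; the ConfigMap name and namespace come from the log, while the data key/value pair is an assumption (the suite flips a similar pair):

```go
// Illustrative only: updating a ConfigMap consumed by a running pod via a
// projected volume. The kubelet refreshes the mounted files on its sync
// period, which is what "updates should be reflected in volume" waits for.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "projected-configmap-test-upd-df620697-1489-4e10-bff4-1a12ebaace48", // from the log
			Namespace: "projected-6622",
		},
		Data: map[string]string{"data-1": "value-2"}, // assumed key/value for the demo
	}
	if _, err := cs.CoreV1().ConfigMaps(cm.Namespace).Update(
		context.Background(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```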
+•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":284,"skipped":4905,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:34.595: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9695 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Service +STEP: watching for the Service to be added +Oct 27 15:26:35.602: INFO: Found Service test-service-x687n in namespace services-9695 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Oct 27 15:26:35.602: INFO: Service test-service-x687n created +STEP: Getting /status +Oct 27 15:26:35.692: INFO: Service test-service-x687n has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Oct 27 15:26:35.873: INFO: observed Service test-service-x687n in namespace services-9695 with annotations: map[] & LoadBalancer: {[]} +Oct 27 15:26:35.874: INFO: Found Service test-service-x687n in namespace services-9695 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Oct 27 15:26:35.874: INFO: Service test-service-x687n has service status patched +STEP: updating the ServiceStatus +Oct 27 15:26:36.055: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Oct 27 15:26:36.144: INFO: Observed Service test-service-x687n in namespace services-9695 with annotations: map[] & Conditions: {[]} +Oct 27 15:26:36.144: INFO: Observed event: &Service{ObjectMeta:{test-service-x687n services-9695 553f8092-93a8-41cf-9b50-86d33c3274dd 40073 0 2021-10-27 15:26:35 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-27 15:26:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2021-10-27 15:26:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:100.67.107.223,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[100.67.107.223],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Oct 27 15:26:36.145: INFO: Found Service test-service-x687n in namespace services-9695 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 15:26:36.145: INFO: Service test-service-x687n has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Oct 27 15:26:36.327: INFO: observed Service test-service-x687n in namespace services-9695 with labels: map[test-service-static:true] +Oct 27 15:26:36.327: INFO: observed Service test-service-x687n in namespace services-9695 with labels: map[test-service-static:true] +Oct 27 15:26:36.327: INFO: observed Service test-service-x687n in namespace services-9695 with labels: map[test-service-static:true] +Oct 27 15:26:36.327: INFO: Found Service test-service-x687n in namespace services-9695 with labels: map[test-service:patched test-service-static:true] +Oct 27 15:26:36.327: INFO: Service test-service-x687n patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Oct 27 15:26:36.511: INFO: Observed event: ADDED +Oct 27 15:26:36.512: INFO: Observed event: MODIFIED +Oct 27 15:26:36.512: INFO: Observed event: MODIFIED +Oct 27 15:26:36.512: INFO: Observed event: MODIFIED +Oct 27 15:26:36.512: INFO: Found Service test-service-x687n in namespace services-9695 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Oct 27 15:26:36.512: INFO: Service test-service-x687n deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:36.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9695" for this suite. 
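
The service-status lifecycle spec above writes `status.loadBalancer.ingress` with 203.0.113.1 (a TEST-NET address reserved for documentation) and watches for the change. A minimal sketch of the same write, done as a merge patch against the `status` subresource; service name and namespace are taken from the log:

```go
// Illustrative only: patching Service status the way the lifecycle spec
// above does — a merge patch targeting the "status" subresource.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 203.0.113.1 is the documentation address the spec itself uses.
	patch := []byte(`{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}`)
	if _, err := cs.CoreV1().Services("services-9695").Patch(
		context.Background(), "test-service-x687n",
		types.MergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}
```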
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":285,"skipped":4939,"failed":0} + +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:36.694: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8251 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create deployment with httpd image +Oct 27 15:26:37.426: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8251 create -f -' +Oct 27 15:26:37.946: INFO: stderr: "" +Oct 27 15:26:37.946: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Oct 27 15:26:37.946: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8251 diff -f -' +Oct 27 15:26:38.584: INFO: rc: 1 +Oct 27 15:26:38.584: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8251 delete -f -' +Oct 27 15:26:38.995: INFO: stderr: "" +Oct 27 15:26:38.995: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:38.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8251" for this suite. 
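
A note on the `rc: 1` in the kubectl-diff spec above: `kubectl diff` deliberately exits 1 when the live object differs from the declared one (exit 0 means no drift, greater than 1 means a real error), so that return code is the assertion succeeding, not a failure. A small Go wrapper showing how to treat the exit code; the manifest path is hypothetical:

```go
// Illustrative only: kubectl diff exits 0 when objects match, 1 when they
// differ, and >1 on error — exit code 1 is a "found a difference" signal.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "diff", "-f", "deployment.yaml") // hypothetical manifest
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift: live and declared objects match")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("drift detected: live object differs from the manifest")
	default:
		fmt.Println("kubectl diff failed:", err)
	}
}
```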
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":286,"skipped":4939,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:39.267: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-6737 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-2294 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3567 +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:56.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-6737" for this suite. +STEP: Destroying namespace "nsdeletetest-2294" for this suite. +Oct 27 15:26:57.193: INFO: Namespace nsdeletetest-2294 was already deleted +STEP: Destroying namespace "nsdeletetest-3567" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":287,"skipped":4973,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:57.284: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7337 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 15:26:58.112: INFO: Waiting up to 5m0s for pod "pod-09519dfb-15ba-48ee-9ff8-77645edf2c22" in namespace "emptydir-7337" to be "Succeeded or Failed" +Oct 27 15:26:58.203: INFO: Pod "pod-09519dfb-15ba-48ee-9ff8-77645edf2c22": Phase="Pending", Reason="", readiness=false. Elapsed: 90.590879ms +Oct 27 15:27:00.294: INFO: Pod "pod-09519dfb-15ba-48ee-9ff8-77645edf2c22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181597218s +STEP: Saw pod success +Oct 27 15:27:00.294: INFO: Pod "pod-09519dfb-15ba-48ee-9ff8-77645edf2c22" satisfied condition "Succeeded or Failed" +Oct 27 15:27:00.384: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-09519dfb-15ba-48ee-9ff8-77645edf2c22 container test-container: +STEP: delete the pod +Oct 27 15:27:00.615: INFO: Waiting for pod pod-09519dfb-15ba-48ee-9ff8-77645edf2c22 to disappear +Oct 27 15:27:00.705: INFO: Pod pod-09519dfb-15ba-48ee-9ff8-77645edf2c22 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:00.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7337" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":288,"skipped":4991,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:00.975: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-2622 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Oct 27 15:27:01.891: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Oct 27 15:27:04.167: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:04.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-2622" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":289,"skipped":5000,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:04.719: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3816 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 15:27:05.548: INFO: Waiting up to 5m0s for pod "pod-1810de8e-2658-4c11-bcd2-401407ac7e09" in namespace "emptydir-3816" to be "Succeeded or Failed" +Oct 27 15:27:05.638: INFO: Pod "pod-1810de8e-2658-4c11-bcd2-401407ac7e09": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.226496ms +Oct 27 15:27:07.801: INFO: Pod "pod-1810de8e-2658-4c11-bcd2-401407ac7e09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.25283033s +STEP: Saw pod success +Oct 27 15:27:07.801: INFO: Pod "pod-1810de8e-2658-4c11-bcd2-401407ac7e09" satisfied condition "Succeeded or Failed" +Oct 27 15:27:07.891: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-1810de8e-2658-4c11-bcd2-401407ac7e09 container test-container: +STEP: delete the pod +Oct 27 15:27:08.122: INFO: Waiting for pod pod-1810de8e-2658-4c11-bcd2-401407ac7e09 to disappear +Oct 27 15:27:08.212: INFO: Pod pod-1810de8e-2658-4c11-bcd2-401407ac7e09 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:08.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3816" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":290,"skipped":5087,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:08.482: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-911 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-911 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 15:27:09.215: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 15:27:09.681: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:11.772: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:13.772: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:15.773: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:17.773: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:19.772: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:21.772: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:23.772: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:25.772: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:27.772: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:27:29.773: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 15:27:29.954: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 15:27:32.687: INFO: Setting 
MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 15:27:32.687: INFO: Going to poll 100.96.1.100 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 15:27:32.777: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.100 8081 | grep -v '^\s*$'] Namespace:pod-network-test-911 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:32.777: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:34.441: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 15:27:34.441: INFO: Going to poll 100.96.0.81 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 15:27:34.531: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.0.81 8081 | grep -v '^\s*$'] Namespace:pod-network-test-911 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:34.531: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:36.218: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:36.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-911" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":291,"skipped":5103,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:36.558: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6267 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting the proxy server +Oct 27 15:27:37.293: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6267 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:37.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6267" for this suite. 
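For anyone replaying the proxy check by hand outside the suite: `--port 0` (as in the `kubectl ... proxy -p 0` invocation above) asks the kernel for a free ephemeral port, which kubectl prints on startup. A minimal sketch, assuming a working `kubectl`/kubeconfig; the temp-file name and the sleep-based readiness wait are illustrative, not part of the test:

```bash
# Start a proxy on a kernel-chosen port; kubectl prints
# "Starting to serve on 127.0.0.1:<port>" to stdout.
kubectl proxy --port=0 > /tmp/proxy.out &
PROXY_PID=$!
sleep 2  # crude wait for the proxy to come up

# Extract the chosen port and curl /api/ through it, as the test does.
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
curl -s "http://127.0.0.1:${PORT}/api/"

kill "$PROXY_PID"
```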
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":292,"skipped":5115,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:37.891: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6302 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:27:38.812: INFO: The status of Pod annotationupdate7a90adbe-6d49-41e3-8e35-8587392f2fd5 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:40.902: INFO: The status of Pod annotationupdate7a90adbe-6d49-41e3-8e35-8587392f2fd5 is Running (Ready = true) +Oct 27 15:27:41.774: INFO: Successfully updated pod "annotationupdate7a90adbe-6d49-41e3-8e35-8587392f2fd5" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:43.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6302" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":293,"skipped":5131,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:44.244: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-2000 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Oct 27 15:27:45.165: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:47.256: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Oct 27 15:27:47.532: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:27:49.624: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Oct 27 15:27:49.714: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:49.714: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:50.405: INFO: Exec stderr: "" +Oct 27 15:27:50.405: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:50.405: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:51.077: INFO: Exec stderr: "" +Oct 27 15:27:51.077: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:51.077: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:51.744: INFO: Exec stderr: "" +Oct 27 15:27:51.744: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:51.744: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:52.400: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Oct 27 15:27:52.400: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-pod ContainerName:busybox-3 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:52.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:53.123: INFO: Exec stderr: "" +Oct 27 15:27:53.123: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:53.123: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:53.774: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Oct 27 15:27:53.774: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:53.774: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:54.434: INFO: Exec stderr: "" +Oct 27 15:27:54.434: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:54.434: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:55.140: INFO: Exec stderr: "" +Oct 27 15:27:55.140: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:55.140: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:55.819: INFO: Exec stderr: "" +Oct 27 15:27:55.819: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2000 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:27:55.819: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:27:56.491: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:56.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-2000" for this suite. +•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":294,"skipped":5145,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:56.763: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-280 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:28:09.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-280" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":346,"completed":295,"skipped":5179,"failed":0} +SS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:28:09.411: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6153 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 15:28:10.143: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 create -f -' +Oct 27 15:28:10.723: INFO: stderr: "" +Oct 27 15:28:10.723: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Oct 27 15:28:10.724: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:28:11.156: INFO: stderr: "" +Oct 27 15:28:11.156: INFO: stdout: "update-demo-nautilus-qf5ck update-demo-nautilus-wsb4d " +Oct 27 15:28:11.156: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:11.497: INFO: stderr: "" +Oct 27 15:28:11.497: INFO: stdout: "" +Oct 27 15:28:11.497: INFO: update-demo-nautilus-qf5ck is created but not running +Oct 27 15:28:16.500: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:28:16.927: INFO: stderr: "" +Oct 27 15:28:16.927: INFO: stdout: "update-demo-nautilus-qf5ck update-demo-nautilus-wsb4d " +Oct 27 15:28:16.927: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:17.263: INFO: stderr: "" +Oct 27 15:28:17.263: INFO: stdout: "true" +Oct 27 15:28:17.263: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:28:17.596: INFO: stderr: "" +Oct 27 15:28:17.596: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:28:17.596: INFO: validating pod update-demo-nautilus-qf5ck +Oct 27 15:28:17.697: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:28:17.697: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:28:17.697: INFO: update-demo-nautilus-qf5ck is verified up and running +Oct 27 15:28:17.697: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-wsb4d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:18.023: INFO: stderr: "" +Oct 27 15:28:18.023: INFO: stdout: "true" +Oct 27 15:28:18.023: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-wsb4d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:28:18.366: INFO: stderr: "" +Oct 27 15:28:18.366: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:28:18.366: INFO: validating pod update-demo-nautilus-wsb4d +Oct 27 15:28:18.465: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:28:18.465: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:28:18.465: INFO: update-demo-nautilus-wsb4d is verified up and running +STEP: scaling down the replication controller +Oct 27 15:28:18.468: INFO: scanned /root for discovery docs: +Oct 27 15:28:18.468: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Oct 27 15:28:18.994: INFO: stderr: "" +Oct 27 15:28:18.994: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:28:18.994: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:28:19.411: INFO: stderr: "" +Oct 27 15:28:19.411: INFO: stdout: "update-demo-nautilus-qf5ck update-demo-nautilus-wsb4d " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Oct 27 15:28:24.412: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:28:24.759: INFO: stderr: "" +Oct 27 15:28:24.759: INFO: stdout: "update-demo-nautilus-qf5ck " +Oct 27 15:28:24.759: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:25.096: INFO: stderr: "" +Oct 27 15:28:25.096: INFO: stdout: "true" +Oct 27 15:28:25.096: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:28:25.416: INFO: stderr: "" +Oct 27 15:28:25.416: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:28:25.416: INFO: validating pod update-demo-nautilus-qf5ck +Oct 27 15:28:25.552: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:28:25.552: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:28:25.552: INFO: update-demo-nautilus-qf5ck is verified up and running +STEP: scaling up the replication controller +Oct 27 15:28:25.555: INFO: scanned /root for discovery docs: +Oct 27 15:28:25.555: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Oct 27 15:28:26.076: INFO: stderr: "" +Oct 27 15:28:26.076: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:28:26.076: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:28:26.489: INFO: stderr: "" +Oct 27 15:28:26.489: INFO: stdout: "update-demo-nautilus-qf5ck update-demo-nautilus-w9nng " +Oct 27 15:28:26.489: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:26.811: INFO: stderr: "" +Oct 27 15:28:26.811: INFO: stdout: "true" +Oct 27 15:28:26.812: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:28:27.145: INFO: stderr: "" +Oct 27 15:28:27.145: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:28:27.145: INFO: validating pod update-demo-nautilus-qf5ck +Oct 27 15:28:27.238: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:28:27.238: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:28:27.238: INFO: update-demo-nautilus-qf5ck is verified up and running +Oct 27 15:28:27.238: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-w9nng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:27.584: INFO: stderr: "" +Oct 27 15:28:27.584: INFO: stdout: "" +Oct 27 15:28:27.584: INFO: update-demo-nautilus-w9nng is created but not running +Oct 27 15:28:32.585: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:28:32.997: INFO: stderr: "" +Oct 27 15:28:32.997: INFO: stdout: "update-demo-nautilus-qf5ck update-demo-nautilus-w9nng " +Oct 27 15:28:32.997: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:33.321: INFO: stderr: "" +Oct 27 15:28:33.321: INFO: stdout: "true" +Oct 27 15:28:33.321: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-qf5ck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:28:33.655: INFO: stderr: "" +Oct 27 15:28:33.655: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:28:33.655: INFO: validating pod update-demo-nautilus-qf5ck +Oct 27 15:28:33.792: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:28:33.792: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:28:33.792: INFO: update-demo-nautilus-qf5ck is verified up and running +Oct 27 15:28:33.792: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-w9nng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:28:34.115: INFO: stderr: "" +Oct 27 15:28:34.115: INFO: stdout: "true" +Oct 27 15:28:34.115: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods update-demo-nautilus-w9nng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:28:34.462: INFO: stderr: "" +Oct 27 15:28:34.462: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:28:34.462: INFO: validating pod update-demo-nautilus-w9nng +Oct 27 15:28:34.562: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:28:34.563: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:28:34.563: INFO: update-demo-nautilus-w9nng is verified up and running +STEP: using delete to clean up resources +Oct 27 15:28:34.563: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 delete --grace-period=0 --force -f -' +Oct 27 15:28:34.988: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:28:34.988: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 15:28:34.988: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get rc,svc -l name=update-demo --no-headers' +Oct 27 15:28:35.409: INFO: stderr: "No resources found in kubectl-6153 namespace.\n" +Oct 27 15:28:35.409: INFO: stdout: "" +Oct 27 15:28:35.410: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6153 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 15:28:35.836: INFO: stderr: "" +Oct 27 15:28:35.836: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:28:35.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6153" for this suite. 
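The scale-down/scale-up cycle above is driven entirely through kubectl, so it can be replayed by hand. A condensed equivalent of the commands the log shows, with `<pod-name>` as a placeholder for one of the pods listed by the label query:

```bash
# Scale the replication controller, then list pods by the test's label.
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# Per-pod "is the container running" probe, same go-template idiom as the log:
# prints "true" only when the update-demo container reports a running state.
kubectl get pod <pod-name> -o template --template \
  '{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
```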
+•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":296,"skipped":5181,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:28:36.106: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3425 +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:28:36.838: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:28:37.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3425" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":297,"skipped":5186,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:28:37.382: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2587 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:28:38.211: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8d4127b-afb9-4b36-a4d3-6dc2e106cfa4" in namespace "downward-api-2587" to be "Succeeded or Failed" +Oct 27 15:28:38.301: INFO: Pod "downwardapi-volume-e8d4127b-afb9-4b36-a4d3-6dc2e106cfa4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 90.496689ms +Oct 27 15:28:40.393: INFO: Pod "downwardapi-volume-e8d4127b-afb9-4b36-a4d3-6dc2e106cfa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182523536s +STEP: Saw pod success +Oct 27 15:28:40.393: INFO: Pod "downwardapi-volume-e8d4127b-afb9-4b36-a4d3-6dc2e106cfa4" satisfied condition "Succeeded or Failed" +Oct 27 15:28:40.483: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-e8d4127b-afb9-4b36-a4d3-6dc2e106cfa4 container client-container: +STEP: delete the pod +Oct 27 15:28:40.715: INFO: Waiting for pod downwardapi-volume-e8d4127b-afb9-4b36-a4d3-6dc2e106cfa4 to disappear +Oct 27 15:28:40.805: INFO: Pod downwardapi-volume-e8d4127b-afb9-4b36-a4d3-6dc2e106cfa4 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:28:40.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2587" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":298,"skipped":5201,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:28:41.076: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9464 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-92908238-8d6d-4224-ab6d-51230a74d69b +STEP: Creating a pod to test consume configMaps +Oct 27 15:28:42.061: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8bff83ec-0a95-4188-b6d9-1dffa384ee5f" in namespace "projected-9464" to be "Succeeded or Failed" +Oct 27 15:28:42.151: INFO: Pod "pod-projected-configmaps-8bff83ec-0a95-4188-b6d9-1dffa384ee5f": Phase="Pending", Reason="", readiness=false. Elapsed: 90.65509ms +Oct 27 15:28:44.242: INFO: Pod "pod-projected-configmaps-8bff83ec-0a95-4188-b6d9-1dffa384ee5f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.181272835s +STEP: Saw pod success +Oct 27 15:28:44.242: INFO: Pod "pod-projected-configmaps-8bff83ec-0a95-4188-b6d9-1dffa384ee5f" satisfied condition "Succeeded or Failed" +Oct 27 15:28:44.332: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-projected-configmaps-8bff83ec-0a95-4188-b6d9-1dffa384ee5f container agnhost-container: +STEP: delete the pod +Oct 27 15:28:44.522: INFO: Waiting for pod pod-projected-configmaps-8bff83ec-0a95-4188-b6d9-1dffa384ee5f to disappear +Oct 27 15:28:44.612: INFO: Pod pod-projected-configmaps-8bff83ec-0a95-4188-b6d9-1dffa384ee5f no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:28:44.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9464" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":299,"skipped":5221,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:28:44.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5917 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-5917 +Oct 27 15:28:45.803: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:28:47.893: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 15:28:47.984: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 15:28:49.031: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 15:28:49.032: INFO: stdout: "iptables" +Oct 27 15:28:49.032: INFO: proxyMode: iptables +Oct 27 15:28:49.124: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 27 15:28:49.214: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-5917 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-5917 +I1027 15:28:49.403187 5725 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, 
namespace: services-5917, replica count: 3 +I1027 15:28:52.504124 5725 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:28:52.863: INFO: Creating new exec pod +Oct 27 15:28:56.318: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec execpod-affinityd7d6q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Oct 27 15:28:57.399: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 15:28:57.399: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:28:57.399: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec execpod-affinityd7d6q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.230.54 80' +Oct 27 15:28:58.419: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.230.54 80\nConnection to 100.64.230.54 80 port [tcp/http] succeeded!\n" +Oct 27 15:28:58.419: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:28:58.419: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec execpod-affinityd7d6q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.28.25 30232' +Oct 27 15:28:59.529: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.28.25 30232\nConnection to 10.250.28.25 30232 port [tcp/*] succeeded!\n" +Oct 27 15:28:59.529: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:28:59.529: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec execpod-affinityd7d6q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.9.48 30232' +Oct 27 15:29:00.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.9.48 30232\nConnection to 10.250.9.48 30232 port [tcp/*] succeeded!\n" +Oct 27 15:29:00.571: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:29:00.571: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec execpod-affinityd7d6q -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.28.25:30232/ ; done' +Oct 27 15:29:01.691: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n" +Oct 27 15:29:01.691: INFO: stdout: "\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9\naffinity-nodeport-timeout-fg4m9" +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Received response from host: affinity-nodeport-timeout-fg4m9 +Oct 27 15:29:01.691: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec execpod-affinityd7d6q -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.28.25:30232/' +Oct 27 15:29:02.709: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n" +Oct 27 15:29:02.709: INFO: stdout: "affinity-nodeport-timeout-fg4m9" +Oct 27 15:29:22.709: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5917 exec 
execpod-affinityd7d6q -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.28.25:30232/' +Oct 27 15:29:23.758: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.28.25:30232/\n" +Oct 27 15:29:23.758: INFO: stdout: "affinity-nodeport-timeout-l7dvq" +Oct 27 15:29:23.758: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5917, will wait for the garbage collector to delete the pods +Oct 27 15:29:24.134: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 91.214738ms +Oct 27 15:29:24.235: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.226948ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:26.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5917" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":300,"skipped":5240,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:27.017: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3684 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:35.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3684" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":346,"completed":301,"skipped":5248,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:35.293: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-826 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 15:29:36.026: INFO: namespace kubectl-826 +Oct 27 15:29:36.026: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-826 create -f -' +Oct 27 15:29:36.549: INFO: stderr: "" +Oct 27 15:29:36.549: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 15:29:37.640: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:29:37.640: INFO: Found 0 / 1 +Oct 27 15:29:38.639: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:29:38.640: INFO: Found 1 / 1 +Oct 27 15:29:38.640: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 27 15:29:38.730: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:29:38.730: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 15:29:38.730: INFO: wait on agnhost-primary startup in kubectl-826 +Oct 27 15:29:38.730: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-826 logs agnhost-primary-gdnw4 agnhost-primary' +Oct 27 15:29:39.206: INFO: stderr: "" +Oct 27 15:29:39.206: INFO: stdout: "Paused\n" +STEP: exposing RC +Oct 27 15:29:39.207: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-826 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Oct 27 15:29:39.637: INFO: stderr: "" +Oct 27 15:29:39.637: INFO: stdout: "service/rm2 exposed\n" +Oct 27 15:29:39.727: INFO: Service rm2 in namespace kubectl-826 found. +STEP: exposing service +Oct 27 15:29:41.909: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-826 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Oct 27 15:29:42.334: INFO: stderr: "" +Oct 27 15:29:42.334: INFO: stdout: "service/rm3 exposed\n" +Oct 27 15:29:42.424: INFO: Service rm3 in namespace kubectl-826 found. 
+[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:44.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-826" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":302,"skipped":5267,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:44.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2145 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:29:46.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945386, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945386, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945386, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945386, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:29:49.907: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2145" for this suite. 
+STEP: Destroying namespace "webhook-2145-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":303,"skipped":5287,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:51.304: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3728 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating pod +Oct 27 15:29:52.222: INFO: The status of Pod pod-hostip-344e197e-cba6-4860-bbb2-b0cda9f26d2b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:29:54.313: INFO: The status of Pod pod-hostip-344e197e-cba6-4860-bbb2-b0cda9f26d2b is Running (Ready = true) +Oct 27 15:29:54.494: INFO: Pod pod-hostip-344e197e-cba6-4860-bbb2-b0cda9f26d2b has hostIP: 10.250.28.25 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:54.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-3728" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":304,"skipped":5312,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:54.767: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-779 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 15:29:57.957: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:58.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-779" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":305,"skipped":5323,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:58.412: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-7465 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Oct 27 15:29:59.510: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:29:59.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-7465" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":306,"skipped":5360,"failed":0} +SS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:29:59.878: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-5170 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:12.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5170" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":307,"skipped":5362,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:12.828: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6786 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-cdd166b8-411c-4f93-ad79-57297a24e946 +STEP: Creating a pod to test consume secrets +Oct 27 15:30:13.748: INFO: Waiting up to 5m0s for pod "pod-secrets-6589614a-88e6-4447-b130-b342db55d28d" in namespace "secrets-6786" to be "Succeeded or Failed" +Oct 27 15:30:13.838: INFO: Pod "pod-secrets-6589614a-88e6-4447-b130-b342db55d28d": Phase="Pending", Reason="", readiness=false. Elapsed: 90.156945ms +Oct 27 15:30:15.929: INFO: Pod "pod-secrets-6589614a-88e6-4447-b130-b342db55d28d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181213059s +STEP: Saw pod success +Oct 27 15:30:15.930: INFO: Pod "pod-secrets-6589614a-88e6-4447-b130-b342db55d28d" satisfied condition "Succeeded or Failed" +Oct 27 15:30:16.020: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-secrets-6589614a-88e6-4447-b130-b342db55d28d container secret-volume-test: +STEP: delete the pod +Oct 27 15:30:16.251: INFO: Waiting for pod pod-secrets-6589614a-88e6-4447-b130-b342db55d28d to disappear +Oct 27 15:30:16.341: INFO: Pod pod-secrets-6589614a-88e6-4447-b130-b342db55d28d no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:16.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6786" for this suite. 
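+Aside: the defaultMode check above corresponds to mounting a secret with an explicit file mode; a minimal sketch (object names, image, and mode are illustrative):
+
+kubectl create secret generic demo-secret --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox:1.33
+    command: ["ls", "-l", "/etc/secret-volume"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: demo-secret
+      defaultMode: 0400
+EOF
+kubectl logs secret-mode-demo   # the mounted file should be listed with mode -r--------
+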
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":308,"skipped":5386,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:16.612: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8791 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-d9a0528c-15ba-43fc-b8ac-8ec95415a14a in namespace container-probe-8791 +Oct 27 15:30:19.622: INFO: Started pod liveness-d9a0528c-15ba-43fc-b8ac-8ec95415a14a in namespace container-probe-8791 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:30:19.713: INFO: Initial restart count of pod liveness-d9a0528c-15ba-43fc-b8ac-8ec95415a14a is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:20.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-8791" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":309,"skipped":5421,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:20.652: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9024 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:34:21.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5fc95dd-e205-4ef8-8144-3f0a1bf7e9ef" in namespace "downward-api-9024" to be "Succeeded or Failed" +Oct 27 15:34:21.577: INFO: Pod "downwardapi-volume-a5fc95dd-e205-4ef8-8144-3f0a1bf7e9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 90.352889ms +Oct 27 15:34:23.669: INFO: Pod "downwardapi-volume-a5fc95dd-e205-4ef8-8144-3f0a1bf7e9ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182087636s +STEP: Saw pod success +Oct 27 15:34:23.669: INFO: Pod "downwardapi-volume-a5fc95dd-e205-4ef8-8144-3f0a1bf7e9ef" satisfied condition "Succeeded or Failed" +Oct 27 15:34:23.759: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-a5fc95dd-e205-4ef8-8144-3f0a1bf7e9ef container client-container: +STEP: delete the pod +Oct 27 15:34:23.991: INFO: Waiting for pod downwardapi-volume-a5fc95dd-e205-4ef8-8144-3f0a1bf7e9ef to disappear +Oct 27 15:34:24.082: INFO: Pod downwardapi-volume-a5fc95dd-e205-4ef8-8144-3f0a1bf7e9ef no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:24.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9024" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":310,"skipped":5438,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:24.353: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-8185 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:34:25.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945665, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945665, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945665, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945665, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:34:29.168: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:34:29.258: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:32.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-8185" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":311,"skipped":5454,"failed":0} + +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:33.026: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8543 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-c49288be-9351-4664-8a87-8b1830c5653c +STEP: Creating the pod +Oct 27 15:34:34.239: INFO: The status of Pod pod-configmaps-23eacdfd-62b9-4b17-b89e-5a52bc89e0b2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:34:36.331: INFO: The status of Pod pod-configmaps-23eacdfd-62b9-4b17-b89e-5a52bc89e0b2 is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-c49288be-9351-4664-8a87-8b1830c5653c +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:38.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8543" for this suite. 
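+Aside: the update propagation checked above can be observed directly; a minimal sketch (object names and image are illustrative, and the delay before the file changes depends on the kubelet's configMap sync period):
+
+kubectl create configmap live-config --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-update-demo
+spec:
+  containers:
+  - name: main
+    image: busybox:1.33
+    command: ["sleep", "3600"]
+    volumeMounts:
+    - name: config
+      mountPath: /etc/config
+  volumes:
+  - name: config
+    configMap:
+      name: live-config
+EOF
+kubectl wait --for=condition=Ready pod/configmap-update-demo
+kubectl patch configmap live-config -p '{"data":{"data-1":"value-2"}}'
+kubectl exec configmap-update-demo -- cat /etc/config/data-1   # eventually prints value-2
+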
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":312,"skipped":5454,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:39.111: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-7496 +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:34:39.843: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating first CR +Oct 27 15:34:42.588: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:34:42Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:34:42Z]] name:name1 resourceVersion:43260 uid:8e7382f6-d4a9-4b17-a84a-6be0da15aac7] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Oct 27 15:34:52.681: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:34:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:34:52Z]] name:name2 resourceVersion:43313 uid:addea1c5-694d-43dd-8621-1a887fbc4f1a] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Oct 27 15:35:02.775: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:34:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:35:02Z]] name:name1 resourceVersion:43357 uid:8e7382f6-d4a9-4b17-a84a-6be0da15aac7] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Oct 27 15:35:12.868: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:34:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update 
time:2021-10-27T15:35:12Z]] name:name2 resourceVersion:43401 uid:addea1c5-694d-43dd-8621-1a887fbc4f1a] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Oct 27 15:35:22.961: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:34:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:35:02Z]] name:name1 resourceVersion:43444 uid:8e7382f6-d4a9-4b17-a84a-6be0da15aac7] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Oct 27 15:35:33.054: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T15:34:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T15:35:12Z]] name:name2 resourceVersion:43512 uid:addea1c5-694d-43dd-8621-1a887fbc4f1a] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:43.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-7496" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":313,"skipped":5473,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:43.508: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-209 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod with failed condition +STEP: updating the pod +Oct 27 15:37:45.467: INFO: Successfully updated pod "var-expansion-1a57ca29-138c-4e35-8b40-c7fd4a8ffaec" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Oct 27 15:37:47.650: INFO: Deleting pod "var-expansion-1a57ca29-138c-4e35-8b40-c7fd4a8ffaec" in namespace "var-expansion-209" +Oct 27 15:37:47.742: INFO: Wait up to 5m0s for pod 
"var-expansion-1a57ca29-138c-4e35-8b40-c7fd4a8ffaec" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:20.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-209" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":314,"skipped":5537,"failed":0} +SSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:20.273: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4764 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:38:21.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5c072eb-02bb-4077-8dcf-6eaad08707ad" in namespace "downward-api-4764" to be "Succeeded or Failed" +Oct 27 15:38:21.200: INFO: Pod "downwardapi-volume-d5c072eb-02bb-4077-8dcf-6eaad08707ad": Phase="Pending", Reason="", readiness=false. Elapsed: 90.113177ms +Oct 27 15:38:23.291: INFO: Pod "downwardapi-volume-d5c072eb-02bb-4077-8dcf-6eaad08707ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.18136993s +STEP: Saw pod success +Oct 27 15:38:23.291: INFO: Pod "downwardapi-volume-d5c072eb-02bb-4077-8dcf-6eaad08707ad" satisfied condition "Succeeded or Failed" +Oct 27 15:38:23.382: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-d5c072eb-02bb-4077-8dcf-6eaad08707ad container client-container: +STEP: delete the pod +Oct 27 15:38:23.573: INFO: Waiting for pod downwardapi-volume-d5c072eb-02bb-4077-8dcf-6eaad08707ad to disappear +Oct 27 15:38:23.663: INFO: Pod downwardapi-volume-d5c072eb-02bb-4077-8dcf-6eaad08707ad no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:23.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4764" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":315,"skipped":5542,"failed":0} + +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:23.934: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-9464 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Oct 27 15:38:25.027: INFO: running pods: 0 < 3 +Oct 27 15:38:27.118: INFO: running pods: 1 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 15:38:30.210: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:32.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9464" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":316,"skipped":5542,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:33.034: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9524 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9524.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9524.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:38:36.973: INFO: DNS probes using dns-9524/dns-test-f3b6696b-d4e5-4404-a4f5-262eef6ee6ab succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:37.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9524" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":317,"skipped":5557,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:37.340: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4385 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:38:38.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9cc9acd-97c3-4337-9ac8-dcb6ff0ffc60" in namespace "projected-4385" to be "Succeeded or Failed" +Oct 27 15:38:38.258: INFO: Pod "downwardapi-volume-a9cc9acd-97c3-4337-9ac8-dcb6ff0ffc60": Phase="Pending", Reason="", readiness=false. Elapsed: 90.257832ms +Oct 27 15:38:40.349: INFO: Pod "downwardapi-volume-a9cc9acd-97c3-4337-9ac8-dcb6ff0ffc60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181388827s +STEP: Saw pod success +Oct 27 15:38:40.350: INFO: Pod "downwardapi-volume-a9cc9acd-97c3-4337-9ac8-dcb6ff0ffc60" satisfied condition "Succeeded or Failed" +Oct 27 15:38:40.439: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod downwardapi-volume-a9cc9acd-97c3-4337-9ac8-dcb6ff0ffc60 container client-container: +STEP: delete the pod +Oct 27 15:38:40.631: INFO: Waiting for pod downwardapi-volume-a9cc9acd-97c3-4337-9ac8-dcb6ff0ffc60 to disappear +Oct 27 15:38:40.721: INFO: Pod downwardapi-volume-a9cc9acd-97c3-4337-9ac8-dcb6ff0ffc60 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:40.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4385" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":318,"skipped":5569,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:40.993: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9497 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:38:43.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945923, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945923, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945923, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945923, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:38:46.594: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:38:46.684: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6425-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:50.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9497" for this suite. 
+STEP: Destroying namespace "webhook-9497-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":319,"skipped":5574,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:50.901: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-4399 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Oct 27 15:38:51.643: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:39:52.285: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:39:52.375: INFO: Starting informer... +STEP: Starting pods... +Oct 27 15:39:52.652: INFO: Pod1 is running on ip-10-250-28-25.ec2.internal. Tainting Node +Oct 27 15:39:55.108: INFO: Pod2 is running on ip-10-250-28-25.ec2.internal. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Oct 27 15:40:00.968: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Oct 27 15:40:21.292: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:21.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-4399" for this suite. 
+•{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":320,"skipped":5592,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:21.751: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2049 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:40:22.484: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2049 version' +Oct 27 15:40:22.818: INFO: stderr: "" +Oct 27 15:40:22.818: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:38:50Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:32:41Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:22.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2049" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":321,"skipped":5602,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:23.001: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-8281 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:24.096: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 15:40:26.164: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 15:40:26.164: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 15:40:26.170: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Oct 27 15:40:26.352: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Oct 27 15:40:26.442: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.442: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.442: INFO: observed Deployment 
test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.442: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 0 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.443: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:26.529: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:28.249: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:28.249: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:28.258: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +STEP: listing Deployments +Oct 27 15:40:28.350: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Oct 27 15:40:28.532: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Oct 27 15:40:28.716: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:28.717: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:28.717: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:28.717: INFO: observed Deployment test-deployment in namespace deployment-8281 with 
ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:28.717: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:28.717: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:29.865: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:30.353: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:30.363: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:30.380: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 15:40:32.469: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Oct 27 15:40:32.843: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:32.843: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:32.843: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:32.843: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:32.843: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:32.843: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 1 +Oct 27 15:40:32.844: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:32.844: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 3 +Oct 27 15:40:32.844: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:32.844: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 2 +Oct 27 15:40:32.844: INFO: observed Deployment test-deployment in namespace deployment-8281 with ReadyReplicas 3 +STEP: deleting the Deployment +Oct 27 15:40:33.025: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +Oct 27 15:40:33.026: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 
15:40:33.116: INFO: Log out all the ReplicaSets if there is no deployment created +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:33.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8281" for this suite. +•{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":322,"skipped":5672,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:33.389: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename limitrange +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-4298 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Oct 27 15:40:34.301: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Oct 27 15:40:34.481: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 15:40:34.481: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Oct 27 15:40:34.666: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 15:40:34.666: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Oct 27 15:40:34.851: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi 
BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Oct 27 15:40:34.851: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Oct 27 15:40:42.596: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:42.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-4298" for this suite. +•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":346,"completed":323,"skipped":5735,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:42.968: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-4238 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override command +Oct 27 15:40:43.797: INFO: Waiting up to 5m0s for pod "client-containers-419b0cd4-3f87-4f87-b881-34ec40805aee" in namespace "containers-4238" to be "Succeeded or Failed" +Oct 27 15:40:43.887: INFO: Pod "client-containers-419b0cd4-3f87-4f87-b881-34ec40805aee": Phase="Pending", Reason="", readiness=false. Elapsed: 90.147025ms +Oct 27 15:40:45.978: INFO: Pod "client-containers-419b0cd4-3f87-4f87-b881-34ec40805aee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.18134085s +STEP: Saw pod success +Oct 27 15:40:45.978: INFO: Pod "client-containers-419b0cd4-3f87-4f87-b881-34ec40805aee" satisfied condition "Succeeded or Failed" +Oct 27 15:40:46.068: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod client-containers-419b0cd4-3f87-4f87-b881-34ec40805aee container agnhost-container: +STEP: delete the pod +Oct 27 15:40:46.302: INFO: Waiting for pod client-containers-419b0cd4-3f87-4f87-b881-34ec40805aee to disappear +Oct 27 15:40:46.392: INFO: Pod client-containers-419b0cd4-3f87-4f87-b881-34ec40805aee no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:46.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-4238" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":324,"skipped":5757,"failed":0} +SSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:46.663: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-single-pod +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-902 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Oct 27 15:40:47.395: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:41:48.037: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:41:48.127: INFO: Starting informer... +STEP: Starting pod... +Oct 27 15:41:48.314: INFO: Pod is running on ip-10-250-28-25.ec2.internal. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Oct 27 15:41:48.589: INFO: Pod wasn't evicted. Proceeding +Oct 27 15:41:48.589: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Oct 27 15:43:03.865: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:43:03.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-902" for this suite. 
+•{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":325,"skipped":5762,"failed":0} + +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:43:04.136: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1651 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod test-webserver-d46fb00f-38fa-44ea-b9e3-1caff3c82dca in namespace container-probe-1651 +Oct 27 15:43:07.146: INFO: Started pod test-webserver-d46fb00f-38fa-44ea-b9e3-1caff3c82dca in namespace container-probe-1651 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:43:07.236: INFO: Initial restart count of pod test-webserver-d46fb00f-38fa-44ea-b9e3-1caff3c82dca is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:47:07.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1651" for this suite. +•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":326,"skipped":5762,"failed":0} +SS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:47:08.099: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename lease-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-9022 +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:47:10.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-9022" for this suite. 
+•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":327,"skipped":5764,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:47:10.192: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3048 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Oct 27 15:47:21.566: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:47:21.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1027 15:47:21.566523 5725 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-3048" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":328,"skipped":5765,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:47:21.751: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-9319 +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:47:22.483: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:47:23.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-9319" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":329,"skipped":5795,"failed":0} +SS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:47:23.456: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-2558 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:47:24.286: INFO: Waiting up to 5m0s for pod "busybox-user-65534-cad8a851-c315-4c72-868c-ffa78e28f046" in namespace "security-context-test-2558" to be "Succeeded or Failed" +Oct 27 15:47:24.377: INFO: Pod "busybox-user-65534-cad8a851-c315-4c72-868c-ffa78e28f046": Phase="Pending", Reason="", readiness=false. Elapsed: 90.292035ms +Oct 27 15:47:26.468: INFO: Pod "busybox-user-65534-cad8a851-c315-4c72-868c-ffa78e28f046": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.18136455s +Oct 27 15:47:26.468: INFO: Pod "busybox-user-65534-cad8a851-c315-4c72-868c-ffa78e28f046" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:47:26.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-2558" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":330,"skipped":5797,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:47:26.739: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-3727 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:48:27.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3727" for this suite. +•{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":331,"skipped":5835,"failed":0} +SSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:48:27.929: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-2679 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:48:28.842: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 15:48:31.024: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:48:33.746: INFO: Deployment "test-cleanup-deployment": 
+&Deployment{ObjectMeta:{test-cleanup-deployment deployment-2679 dd476c28-9715-4d06-a869-80a97023990d 47827 1 2021-10-27 15:48:31 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 15:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:48:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008117638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 15:48:31 +0000 UTC,LastTransitionTime:2021-10-27 15:48:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2021-10-27 15:48:32 +0000 UTC,LastTransitionTime:2021-10-27 15:48:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:48:33.837: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b 
deployment-2679 2a4bd5d7-ee67-43e9-8d65-fe3f27eac708 47820 1 2021-10-27 15:48:31 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment dd476c28-9715-4d06-a869-80a97023990d 0xc008117a07 0xc008117a08}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd476c28-9715-4d06-a869-80a97023990d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:48:32 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008117ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:48:33.927: INFO: Pod "test-cleanup-deployment-5b4d99b59b-cf7xb" is available: +&Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-cf7xb test-cleanup-deployment-5b4d99b59b- deployment-2679 26812cac-bb2e-40d9-8841-fa04a5a0da44 47819 0 2021-10-27 15:48:31 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[cni.projectcalico.org/containerID:92606316ab9832240be09a758c8aab1a841c192c56e0bb4b5ecef3a6f6571f94 cni.projectcalico.org/podIP:100.96.1.146/32 cni.projectcalico.org/podIPs:100.96.1.146/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 2a4bd5d7-ee67-43e9-8d65-fe3f27eac708 0xc0050c9ba7 0xc0050c9ba8}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4bd5d7-ee67-43e9-8d65-fe3f27eac708\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:48:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:48:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t8g9j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8g9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromS
ource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.146,StartTime:2021-10-27 15:48:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:48:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://f4c30b78635162987aefe1485e943fa9f79018f2fbe3edf396f9ede16ee9d66b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:48:33.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2679" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":332,"skipped":5838,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:48:34.198: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-212 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating all guestbook components +Oct 27 15:48:34.933: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Oct 27 15:48:34.933: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 create -f -' +Oct 27 15:48:36.113: INFO: stderr: "" +Oct 27 15:48:36.113: INFO: stdout: "service/agnhost-replica created\n" +Oct 27 15:48:36.113: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Oct 27 15:48:36.113: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 create -f -' +Oct 27 15:48:36.660: INFO: stderr: "" +Oct 27 15:48:36.660: INFO: stdout: "service/agnhost-primary created\n" +Oct 27 15:48:36.660: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Oct 27 15:48:36.660: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 create -f -' +Oct 27 15:48:37.208: INFO: stderr: "" +Oct 27 15:48:37.208: INFO: stdout: "service/frontend created\n" +Oct 27 15:48:37.208: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Oct 27 15:48:37.209: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 create -f -' +Oct 27 15:48:37.764: INFO: stderr: "" +Oct 27 15:48:37.764: INFO: stdout: "deployment.apps/frontend created\n" +Oct 27 15:48:37.764: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 15:48:37.764: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 create -f -' +Oct 27 15:48:38.312: INFO: stderr: "" +Oct 27 15:48:38.312: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Oct 27 15:48:38.312: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 15:48:38.312: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 create -f -' +Oct 27 15:48:38.843: INFO: stderr: "" +Oct 27 15:48:38.843: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Oct 27 15:48:38.843: INFO: Waiting for all frontend pods to be Running. +Oct 27 15:48:43.944: INFO: Waiting for frontend to serve content. +Oct 27 15:48:44.089: INFO: Trying to add a new entry to the guestbook. +Oct 27 15:48:44.237: INFO: Verifying that added entry can be retrieved. 
+STEP: using delete to clean up resources +Oct 27 15:48:44.373: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 delete --grace-period=0 --force -f -' +Oct 27 15:48:44.790: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:48:44.790: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 15:48:44.790: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 delete --grace-period=0 --force -f -' +Oct 27 15:48:45.205: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:48:45.205: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 15:48:45.206: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 delete --grace-period=0 --force -f -' +Oct 27 15:48:45.622: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:48:45.622: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 15:48:45.622: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 delete --grace-period=0 --force -f -' +Oct 27 15:48:46.036: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:48:46.036: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 15:48:46.036: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 delete --grace-period=0 --force -f -' +Oct 27 15:48:46.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:48:46.456: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 15:48:46.456: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tm94z-0j6.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-212 delete --grace-period=0 --force -f -' +Oct 27 15:48:46.870: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:48:46.870: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:48:46.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-212" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":333,"skipped":5862,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:48:47.141: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-9822 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:48:47.962: INFO: Got root ca configmap in namespace "svcaccounts-9822" +Oct 27 15:48:48.053: INFO: Deleted root ca configmap in namespace "svcaccounts-9822" +STEP: waiting for a new root ca configmap created +Oct 27 15:48:48.644: INFO: Recreated root ca configmap in namespace "svcaccounts-9822" +Oct 27 15:48:48.735: INFO: Updated root ca configmap in namespace "svcaccounts-9822" +STEP: waiting for the root ca configmap reconciled +Oct 27 15:48:49.326: INFO: Reconciled root ca configmap in namespace "svcaccounts-9822" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:48:49.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-9822" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":334,"skipped":5886,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:48:49.597: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1819 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:48:51.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1819" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":335,"skipped":5898,"failed":0} +SSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:48:51.687: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-9361 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:48:52.418: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:48:52.600: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 15:48:52.690: INFO: +Logging pods the apiserver thinks is on node ip-10-250-28-25.ec2.internal before test +Oct 27 15:48:52.784: INFO: apiserver-proxy-kb6fx from kube-system started at 2021-10-27 13:53:35 +0000 UTC (2 container statuses recorded) +Oct 27 15:48:52.784: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: blackbox-exporter-65c549b94c-kw2mt from kube-system started at 2021-10-27 14:00:28 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.784: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: calico-node-pqn8p from kube-system started at 2021-10-27 13:55:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.784: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: csi-driver-node-ddm2w from kube-system started at 2021-10-27 13:53:35 +0000 UTC (3 container statuses recorded) +Oct 27 15:48:52.784: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: kube-proxy-tnk6p from kube-system started at 2021-10-27 13:56:34 +0000 UTC (2 container statuses recorded) +Oct 27 15:48:52.784: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: node-exporter-jhkvj from kube-system started at 2021-10-27 13:53:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.784: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: node-problem-detector-lscmn from kube-system started at 2021-10-27 14:20:29 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.784: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:48:52.784: INFO: +Logging pods the apiserver thinks is on node ip-10-250-9-48.ec2.internal before test +Oct 27 15:48:52.966: INFO: addons-nginx-ingress-controller-b7784495c-zzbhb from kube-system started at 2021-10-27 15:39:55 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-bnwpb from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: apiserver-proxy-4k9m7 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (2 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: calico-kube-controllers-56bcbfb5c5-nhtm5 from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: calico-node-pcdrk from kube-system started at 2021-10-27 13:55:32 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: calico-node-vertical-autoscaler-785b5f968-89m6j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 
container statuses recorded) +Oct 27 15:48:52.966: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: calico-typha-deploy-546b97d4b5-xrvqz from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-gbzpp from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: calico-typha-vertical-autoscaler-5c9655cddd-wwsqk from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: coredns-746d4d76f8-nqpnh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: coredns-746d4d76f8-zksdl from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: csi-driver-node-cwstr from kube-system started at 2021-10-27 13:53:22 +0000 UTC (3 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: kube-proxy-d8j27 from kube-system started at 2021-10-27 13:56:29 +0000 UTC (2 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: metrics-server-98f7b76bf-s6v4j from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: node-exporter-27q2j from kube-system started at 2021-10-27 13:53:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: node-problem-detector-66fvb from kube-system started at 2021-10-27 14:20:29 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: vpn-shoot-77846799c6-lvhrh from kube-system started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: dashboard-metrics-scraper-7ccbfc448f-8vkgz from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:48:52.966: INFO: kubernetes-dashboard-5484586d8f-2hskr from kubernetes-dashboard started at 2021-10-27 13:53:42 +0000 UTC (1 container statuses recorded) +Oct 27 15:48:52.966: INFO: Container kubernetes-dashboard ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-cddcd244-efee-4157-9c78-04eb41cde200 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-cddcd244-efee-4157-9c78-04eb41cde200 off the node ip-10-250-28-25.ec2.internal +STEP: verifying the node doesn't have the label kubernetes.io/e2e-cddcd244-efee-4157-9c78-04eb41cde200 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:48:58.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-9361" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":336,"skipped":5903,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:48:58.623: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7285 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:48:59.355: INFO: Creating deployment "webserver-deployment" +Oct 27 15:48:59.446: INFO: Waiting for observed generation 1 +Oct 27 15:48:59.536: INFO: Waiting for all required pods to come up +Oct 27 15:48:59.627: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Oct 27 15:49:03.897: INFO: Waiting for deployment "webserver-deployment" to complete +Oct 27 15:49:04.077: INFO: Updating deployment "webserver-deployment" with a non-existent image +Oct 27 15:49:04.258: INFO: Updating deployment webserver-deployment +Oct 27 15:49:04.258: INFO: Waiting for observed generation 2 +Oct 27 15:49:04.401: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Oct 27 15:49:04.491: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Oct 27 15:49:04.600: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:49:04.903: INFO: Verifying that the second rollout's replicaset has 
.status.availableReplicas = 0 +Oct 27 15:49:04.903: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Oct 27 15:49:05.001: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:49:05.191: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Oct 27 15:49:05.191: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Oct 27 15:49:05.372: INFO: Updating deployment webserver-deployment +Oct 27 15:49:05.372: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:49:05.601: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Oct 27 15:49:05.701: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:49:06.001: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-7285 a8e03e2e-5c84-4d06-ac2b-0fb28da7e03b 48399 3 2021-10-27 15:48:59 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006ffaba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 15:49:05 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-27 15:49:05 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Oct 27 15:49:06.091: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-7285 1722245e-41c2-4769-bb36-c920e51c9cd9 48392 3 2021-10-27 15:49:04 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a8e03e2e-5c84-4d06-ac2b-0fb28da7e03b 0xc004cd12a7 0xc004cd12a8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8e03e2e-5c84-4d06-ac2b-0fb28da7e03b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004cd1348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:49:06.091: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Oct 27 15:49:06.092: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-7285 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 48393 3 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a8e03e2e-5c84-4d06-ac2b-0fb28da7e03b 0xc004cd13a7 0xc004cd13a8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8e03e2e-5c84-4d06-ac2b-0fb28da7e03b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004cd1438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:49:06.273: INFO: Pod "webserver-deployment-795d758f88-2szx7" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-2szx7 webserver-deployment-795d758f88- deployment-7285 eeef0657-1166-4cf9-a5f9-152f1efdd1fb 48381 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004cd1927 0xc004cd1928}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xl4cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xl4cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.273: INFO: Pod "webserver-deployment-795d758f88-2thxq" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-2thxq webserver-deployment-795d758f88- deployment-7285 9002b161-b794-40b2-9725-768c19fc16ab 48383 0 2021-10-27 15:49:04 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:f3656f4fd987f5f4568d5fe54f5a98a91b80dc0182d16b517fa0ff9c25b892e8 cni.projectcalico.org/podIP:100.96.0.92/32 cni.projectcalico.org/podIPs:100.96.0.92/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004cd1b20 0xc004cd1b21}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jkgp4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkgp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.273: INFO: Pod "webserver-deployment-795d758f88-44szk" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-44szk webserver-deployment-795d758f88- deployment-7285 670f3cd1-198f-4cbe-a19e-6c9564f546f4 48372 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004cd1d30 0xc004cd1d31}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n6kdm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6kdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.274: INFO: Pod "webserver-deployment-795d758f88-5bjv5" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-5bjv5 webserver-deployment-795d758f88- deployment-7285 570a6728-7dae-46f0-9a03-ba1e430a509b 48386 0 2021-10-27 15:49:04 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:3ccb8f85ac297e74da857cfdeacf82ef8bb7585ea83e477222cc9427e0285dc0 cni.projectcalico.org/podIP:100.96.0.93/32 cni.projectcalico.org/podIPs:100.96.0.93/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004cd1f27 0xc004cd1f28}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l6cnr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6cnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.274: INFO: Pod "webserver-deployment-795d758f88-bqb7s" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-bqb7s webserver-deployment-795d758f88- deployment-7285 6ab2bbcc-665f-44eb-9013-d8df7d218780 48402 0 2021-10-27 15:49:04 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:86a6c4f965b7ca0e5ab069ba33fedcd839d4d88026c79d16c7cccdd9a644d11a cni.projectcalico.org/podIP:100.96.1.159/32 cni.projectcalico.org/podIPs:100.96.1.159/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004dee750 0xc004dee751}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xshd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xshd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.274: INFO: Pod "webserver-deployment-795d758f88-cjgw6" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-cjgw6 webserver-deployment-795d758f88- deployment-7285 c12b1d09-4a60-49d9-8b8c-b5e0b6c5f96a 48374 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004def1e7 0xc004def1e8}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rwflj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwflj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.274: INFO: Pod "webserver-deployment-795d758f88-j44x7" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-j44x7 webserver-deployment-795d758f88- deployment-7285 2416d625-a873-486e-bba7-2c740c47c8e0 48315 0 2021-10-27 15:49:04 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004def530 0xc004def531}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } 
{kubelet Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7qzl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qzl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/no
t-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.274: INFO: Pod "webserver-deployment-795d758f88-jq7bq" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-jq7bq webserver-deployment-795d758f88- deployment-7285 e11f3153-f402-4c8e-b20f-9e543a563ac9 48385 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004def887 0xc004def888}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qdqzv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdqzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Toleration
Seconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.275: INFO: Pod "webserver-deployment-795d758f88-pvp8p" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-pvp8p webserver-deployment-795d758f88- deployment-7285 f464b290-e12f-4011-aae5-fe5a7f9a5ce8 48390 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc004defb77 0xc004defb78}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bzdf6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bzdf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQD
N:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.275: INFO: Pod "webserver-deployment-795d758f88-sjs4n" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-sjs4n webserver-deployment-795d758f88- deployment-7285 ac212aa9-0783-466e-92a9-8e7464c0d11c 48388 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc00389a000 0xc00389a001}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mlwgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlwgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQ
DN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.275: INFO: Pod "webserver-deployment-795d758f88-wckqf" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-wckqf webserver-deployment-795d758f88- deployment-7285 4ee59ab6-d87a-479d-b19e-ec3a58e0466e 48387 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc00389a1e7 0xc00389a1e8}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x89hg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x89hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQD
N:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.275: INFO: Pod "webserver-deployment-795d758f88-wq6m8" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-wq6m8 webserver-deployment-795d758f88- deployment-7285 43209f0b-2d1c-4084-bb0d-df3c1dfe738d 48405 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc00389a3d0 0xc00389a3d1}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zk2xl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zk2xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQD
N:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.276: INFO: Pod "webserver-deployment-795d758f88-xjkbv" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-xjkbv webserver-deployment-795d758f88- deployment-7285 ffd81f92-c253-4f53-b83a-6ae127542bcd 48324 0 2021-10-27 15:49:04 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1722245e-41c2-4769-bb36-c920e51c9cd9 0xc00389a5a0 0xc00389a5a1}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1722245e-41c2-4769-bb36-c920e51c9cd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h5wb5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQ
DN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.276: INFO: Pod "webserver-deployment-847dcfb7fb-2fnm9" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2fnm9 webserver-deployment-847dcfb7fb- deployment-7285 1f41eb1d-fc3c-4002-8fbe-6ad511f2498f 48380 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389a797 0xc00389a798}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zgkpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgkpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemera
lContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.276: INFO: Pod "webserver-deployment-847dcfb7fb-7dkpr" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7dkpr webserver-deployment-847dcfb7fb- deployment-7285 63398b6a-2060-4862-9893-dd1946e3ca54 48267 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:da2694bcb58b840fda21e1a871a0bbed7b5774807248359bcdcbaa9c7dae2558 cni.projectcalico.org/podIP:100.96.1.154/32 cni.projectcalico.org/podIPs:100.96.1.154/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389a977 0xc00389a978}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:49:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.154\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7g49r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g49r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemera
lContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.154,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://ebe9cb04c80d74e9cb688b600a8f54f88c1bc706265162d6af4a5b3a93730ccb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.276: INFO: Pod "webserver-deployment-847dcfb7fb-8bjvf" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8bjvf webserver-deployment-847dcfb7fb- deployment-7285 98dc293f-3efa-448c-a40c-f6964f294536 48250 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:f643f85c2d52fd752d280a23d60ac5a1c809f252c0863555f606e9045a2e58aa cni.projectcalico.org/podIP:100.96.0.89/32 cni.projectcalico.org/podIPs:100.96.0.89/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389aba7 0xc00389aba8}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.89\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2w2h4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2w2h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:100.96.0.89,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://827a06c0372206e8db7373c0a6fbd2673b0e70459618d00daecacf5db38fa3fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.276: INFO: Pod "webserver-deployment-847dcfb7fb-8fgxx" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8fgxx webserver-deployment-847dcfb7fb- deployment-7285 f3c2df33-73ec-4efe-bddf-327a6a5c8b61 48382 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389ada0 0xc00389ada1}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-chhkz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-chhkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.277: INFO: Pod "webserver-deployment-847dcfb7fb-8rk5m" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8rk5m webserver-deployment-847dcfb7fb- deployment-7285 f599d881-6ddc-46dd-b1ce-0ad4f78c0784 48253 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:2f86f57161b1b11bb384952e5bb0f6d49fc0f55ecf23a73282336ce2dac2d62d cni.projectcalico.org/podIP:100.96.1.153/32 cni.projectcalico.org/podIPs:100.96.1.153/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389af87 0xc00389af88}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.153\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dkgt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkgt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernete
s.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.153,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://61dccc98b9621ff4c052c57e0bb67b32912227b12d4c1b15e46aa857bb6345d9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.277: INFO: Pod "webserver-deployment-847dcfb7fb-b6bq8" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-b6bq8 webserver-deployment-847dcfb7fb- deployment-7285 22c82da1-4ce5-4fc4-8c18-67679c701a35 48391 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389b1a7 0xc00389b1a8}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x2l9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2l9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.277: INFO: Pod "webserver-deployment-847dcfb7fb-c4khn" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-c4khn webserver-deployment-847dcfb7fb- deployment-7285 72077f97-44d7-41c7-a745-beb9ce973a2f 48247 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:8958da7f7f1a4f6dd70c0351c1eebac516ab8f0c3afc427d037ce40a0440ae19 cni.projectcalico.org/podIP:100.96.0.88/32 cni.projectcalico.org/podIPs:100.96.0.88/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389b3c7 0xc00389b3c8}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.88\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pbg6n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbg6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:100.96.0.88,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://0eb1f7fb007fd65eb26d84f6f6c6f87ee13a8e7b00c72109166c91ecefcad7bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.277: INFO: Pod "webserver-deployment-847dcfb7fb-dbtpd" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dbtpd webserver-deployment-847dcfb7fb- deployment-7285 7630d7f2-ee9b-45da-bc5f-bb35d4ca16c8 48244 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:854e445333038c96e6f5a8fa148cdb27a2b4fb0b7d09ac5869161bf122713f34 cni.projectcalico.org/podIP:100.96.0.91/32 cni.projectcalico.org/podIPs:100.96.0.91/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389b600 0xc00389b601}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status} {kubelet Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-psv8m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psv8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccount
Token:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:100.96.0.91,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d375cf119e841e8616df46c8c3479d264d174c7496ce6c1089b120e5f04972f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.277: INFO: Pod "webserver-deployment-847dcfb7fb-jdkgv" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jdkgv webserver-deployment-847dcfb7fb- deployment-7285 c993a467-62d2-4874-bc89-968739ab74dd 48406 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389b800 0xc00389b801}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-45zv2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45zv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.278: INFO: Pod "webserver-deployment-847dcfb7fb-n8bwc" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-n8bwc webserver-deployment-847dcfb7fb- deployment-7285 341577e8-32c0-4d9a-8156-cada1b7287c9 48264 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:c21135946c4d568642357d32cbcd637388a487184be2e34538b945995233950e cni.projectcalico.org/podIP:100.96.1.158/32 cni.projectcalico.org/podIPs:100.96.1.158/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389b9f7 0xc00389b9f8}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:49:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m5z88,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5z88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernete
s.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.158,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e4a3cafdd76f6e60af7f61045bbd80d9934d50677ec5b7897950923e931429c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.278: INFO: Pod "webserver-deployment-847dcfb7fb-n9xwc" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-n9xwc webserver-deployment-847dcfb7fb- deployment-7285 d2b107c7-a619-4ac3-bbbc-736ca09ac131 48398 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389bbf7 0xc00389bbf8}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-djtr7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djtr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.278: INFO: Pod "webserver-deployment-847dcfb7fb-qhp2k" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qhp2k webserver-deployment-847dcfb7fb- deployment-7285 b4a589ee-5c52-4f9c-a954-e54695993667 48270 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:648de3a075865f092ef42711280a50757d04891e2e8220d090c4482b030ab83d cni.projectcalico.org/podIP:100.96.1.156/32 cni.projectcalico.org/podIPs:100.96.1.156/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389bde7 0xc00389bde8}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:49:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.156\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bg7tq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bg7tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernete
s.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:100.96.1.156,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://bc03409f772105fb40b03c318a02de3f5a03fc2a5806dbc61e89ee477a28655d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.278: INFO: Pod "webserver-deployment-847dcfb7fb-qtkwl" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qtkwl webserver-deployment-847dcfb7fb- deployment-7285 9588fa56-887d-4ab2-bf73-d0711d2da838 48358 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc00389bfe7 0xc00389bfe8}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6t2rb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6t2rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.279: INFO: Pod "webserver-deployment-847dcfb7fb-thmjk" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-thmjk webserver-deployment-847dcfb7fb- deployment-7285 e2c11b0f-067e-4b87-b8f3-d3b0674c1361 48403 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc0008c6347 0xc0008c6348}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6bnxw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6bnxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemeral
Container{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.279: INFO: Pod "webserver-deployment-847dcfb7fb-vfb5c" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vfb5c webserver-deployment-847dcfb7fb- deployment-7285 6049bbd2-9701-4c6a-919d-089524084de9 48404 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc0008c6537 0xc0008c6538}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vcq4s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcq4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemeral
Container{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.279: INFO: Pod "webserver-deployment-847dcfb7fb-vmqdz" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vmqdz webserver-deployment-847dcfb7fb- deployment-7285 24e54025-5818-46d6-a32d-d3a1f9a6caf1 48365 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc0008c67b7 0xc0008c67b8}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zbsgr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbsgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemeral
Container{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.279: INFO: Pod "webserver-deployment-847dcfb7fb-vx8ql" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vx8ql webserver-deployment-847dcfb7fb- deployment-7285 f51a2c47-6741-4e4d-afb9-1fd84b64cb65 48401 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc0008c6af7 0xc0008c6af8}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9v4zn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9v4zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-28-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemera
lContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.28.25,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.280: INFO: Pod "webserver-deployment-847dcfb7fb-w5cqk" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-w5cqk webserver-deployment-847dcfb7fb- deployment-7285 e574ad42-2dc9-4033-8f63-c9e8016fd4fb 48241 0 2021-10-27 15:48:59 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:1c69ad0585cf320533bfe481e63b2afa4af4d83c9098111810db69faa2f88959 cni.projectcalico.org/podIP:100.96.0.90/32 cni.projectcalico.org/podIPs:100.96.0.90/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc0008c6d37 0xc0008c6d38}] [] [{kube-controller-manager Update v1 2021-10-27 15:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:49:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:49:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.90\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x4ftp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4ftp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemeral
Container{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:48:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:100.96.0.90,StartTime:2021-10-27 15:48:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:49:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://19950c03af2255e11e4a97871059c07b4633dd26b9355e21c4b87d98ec4ec7a3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.280: INFO: Pod "webserver-deployment-847dcfb7fb-wvszv" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wvszv webserver-deployment-847dcfb7fb- deployment-7285 c477ab01-4707-4958-ab1a-77b655b4bb28 48394 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc0008c6f30 0xc0008c6f31}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wg7xp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg7xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemeral
Container{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:49:06.280: INFO: Pod "webserver-deployment-847dcfb7fb-zx2rk" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zx2rk webserver-deployment-847dcfb7fb- deployment-7285 5c0422bf-3f36-44be-9cae-4f9bbc157c7e 48407 0 2021-10-27 15:49:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2d989e7d-3b8f-45c6-8ffb-50e38de8031b 0xc0008c7107 0xc0008c7108}] [] [{kube-controller-manager Update v1 2021-10-27 15:49:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d989e7d-3b8f-45c6-8ffb-50e38de8031b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:49:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n2qmn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tm94z-0j6.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n2qmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-9-48.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemeral
Container{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:49:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.9.48,PodIP:,StartTime:2021-10-27 15:49:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:06.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7285" for this suite. +•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":337,"skipped":5951,"failed":0} +SSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:06.462: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-2253 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 15:49:07.292: INFO: Waiting up to 5m0s for pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3" in namespace "security-context-2253" to be "Succeeded or Failed" +Oct 27 15:49:07.383: INFO: Pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3": Phase="Pending", Reason="", readiness=false. Elapsed: 90.644445ms +Oct 27 15:49:09.473: INFO: Pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.181400193s +Oct 27 15:49:11.564: INFO: Pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272426436s +Oct 27 15:49:13.655: INFO: Pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362785018s +Oct 27 15:49:15.747: INFO: Pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454979798s +Oct 27 15:49:17.838: INFO: Pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.546106646s +STEP: Saw pod success +Oct 27 15:49:17.838: INFO: Pod "security-context-e572d664-092d-46eb-aded-93a27e05ccf3" satisfied condition "Succeeded or Failed" +Oct 27 15:49:17.929: INFO: Trying to get logs from node ip-10-250-9-48.ec2.internal pod security-context-e572d664-092d-46eb-aded-93a27e05ccf3 container test-container: +STEP: delete the pod +Oct 27 15:49:18.121: INFO: Waiting for pod security-context-e572d664-092d-46eb-aded-93a27e05ccf3 to disappear +Oct 27 15:49:18.211: INFO: Pod security-context-e572d664-092d-46eb-aded-93a27e05ccf3 no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:18.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-2253" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":338,"skipped":5954,"failed":0} +S +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:18.482: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslicemirroring-2640 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: mirroring a new custom Endpoint +STEP: mirroring an update to a custom Endpoint +STEP: mirroring deletion of a custom Endpoint +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:19.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-2640" for this suite. 
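The mirroring assertions above create, update, and delete a plain `Endpoints` resource and check that the control plane keeps a matching `EndpointSlice` in sync. As a rough reproduction outside the test harness, something like the following sketch should trigger the same mirroring behaviour (the service name, namespace, and address are illustrative, not the objects the framework generates):

```bash
# Apply a selector-less Service plus a hand-written Endpoints object;
# the endpointslice-mirroring controller copies the latter into an
# EndpointSlice in the same namespace.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: example-svc
subsets:
- addresses:
  - ip: 10.0.0.10
  ports:
  - port: 80
EOF

# The mirrored slice carries the kubernetes.io/service-name label:
kubectl get endpointslices -l kubernetes.io/service-name=example-svc
```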
+•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":339,"skipped":5955,"failed":0} +SSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:20.035: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-2813 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Oct 27 15:49:25.724: INFO: Successfully updated pod "adopt-release--1-f2vqh" +STEP: Checking that the Job readopts the Pod +Oct 27 15:49:25.724: INFO: Waiting up to 15m0s for pod "adopt-release--1-f2vqh" in namespace "job-2813" to be "adopted" +Oct 27 15:49:25.814: INFO: Pod "adopt-release--1-f2vqh": Phase="Running", Reason="", readiness=true. Elapsed: 90.102077ms +Oct 27 15:49:25.814: INFO: Pod "adopt-release--1-f2vqh" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Oct 27 15:49:26.498: INFO: Successfully updated pod "adopt-release--1-f2vqh" +STEP: Checking that the Job releases the Pod +Oct 27 15:49:26.498: INFO: Waiting up to 15m0s for pod "adopt-release--1-f2vqh" in namespace "job-2813" to be "released" +Oct 27 15:49:26.588: INFO: Pod "adopt-release--1-f2vqh": Phase="Running", Reason="", readiness=true. Elapsed: 90.230925ms +Oct 27 15:49:26.589: INFO: Pod "adopt-release--1-f2vqh" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:26.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-2813" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":340,"skipped":5961,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:26.859: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2225 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-7ee9beeb-1de7-4aa6-9993-e7987eedeb1c +STEP: Creating a pod to test consume configMaps +Oct 27 15:49:27.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-f23fcb33-4fa6-43b1-8cdf-68ed6ef31e80" in namespace "configmap-2225" to be "Succeeded or Failed" +Oct 27 15:49:27.868: INFO: Pod "pod-configmaps-f23fcb33-4fa6-43b1-8cdf-68ed6ef31e80": Phase="Pending", Reason="", readiness=false. Elapsed: 90.597385ms +Oct 27 15:49:29.959: INFO: Pod "pod-configmaps-f23fcb33-4fa6-43b1-8cdf-68ed6ef31e80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181912033s +STEP: Saw pod success +Oct 27 15:49:29.959: INFO: Pod "pod-configmaps-f23fcb33-4fa6-43b1-8cdf-68ed6ef31e80" satisfied condition "Succeeded or Failed" +Oct 27 15:49:30.050: INFO: Trying to get logs from node ip-10-250-28-25.ec2.internal pod pod-configmaps-f23fcb33-4fa6-43b1-8cdf-68ed6ef31e80 container agnhost-container: +STEP: delete the pod +Oct 27 15:49:30.242: INFO: Waiting for pod pod-configmaps-f23fcb33-4fa6-43b1-8cdf-68ed6ef31e80 to disappear +Oct 27 15:49:30.332: INFO: Pod pod-configmaps-f23fcb33-4fa6-43b1-8cdf-68ed6ef31e80 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:30.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2225" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":341,"skipped":5975,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:30.603: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename tables +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-3704 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:31.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-3704" for this suite. +•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":342,"skipped":5981,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:31.698: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9006 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 15:49:32.616: INFO: The status of Pod pod-update-activedeadlineseconds-b7065325-f2a3-43fe-a4ea-566601feacd1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:49:34.707: INFO: The status of Pod pod-update-activedeadlineseconds-b7065325-f2a3-43fe-a4ea-566601feacd1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:49:36.707: 
INFO: The status of Pod pod-update-activedeadlineseconds-b7065325-f2a3-43fe-a4ea-566601feacd1 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 15:49:37.574: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b7065325-f2a3-43fe-a4ea-566601feacd1" +Oct 27 15:49:37.574: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b7065325-f2a3-43fe-a4ea-566601feacd1" in namespace "pods-9006" to be "terminated due to deadline exceeded" +Oct 27 15:49:37.664: INFO: Pod "pod-update-activedeadlineseconds-b7065325-f2a3-43fe-a4ea-566601feacd1": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 90.283379ms +Oct 27 15:49:37.664: INFO: Pod "pod-update-activedeadlineseconds-b7065325-f2a3-43fe-a4ea-566601feacd1" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:37.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9006" for this suite. +•{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":343,"skipped":6010,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:37.935: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6156 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:49:38.762: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-1b62feb5-9d01-4b2a-a2de-3e9585be4ec2" in namespace "security-context-test-6156" to be "Succeeded or Failed" +Oct 27 15:49:38.852: INFO: Pod "busybox-readonly-false-1b62feb5-9d01-4b2a-a2de-3e9585be4ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 89.956041ms +Oct 27 15:49:40.943: INFO: Pod "busybox-readonly-false-1b62feb5-9d01-4b2a-a2de-3e9585be4ec2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180809171s +Oct 27 15:49:40.943: INFO: Pod "busybox-readonly-false-1b62feb5-9d01-4b2a-a2de-3e9585be4ec2" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:40.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-6156" for this suite. +•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":344,"skipped":6045,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:41.214: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-2698 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 15:49:44.412: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:44.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-2698" for this suite. 
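The check above verifies that when a container writes its termination-log file and exits successfully, the message in `status` comes from that file even though `FallbackToLogsOnError` is set (the log fallback only applies when the container fails). A minimal reproduction might look like this sketch (pod name and image are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-test
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # Write to the termination-log file and exit 0; the file wins
    # because FallbackToLogsOnError only kicks in on failure.
    command: ["sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# Once the pod has succeeded, the message surfaces in its status:
kubectl get pod termination-msg-test \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
```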
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":345,"skipped":6062,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:44.868: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-1861 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Oct 27 15:49:45.959: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 48885 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:49:45.959: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 48885 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Oct 27 15:49:56.142: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 48946 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:49:56.142: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 48946 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers 
observe the notification +Oct 27 15:50:06.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 49001 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:50:06.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 49001 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Oct 27 15:50:16.419: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 49048 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:50:16.419: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1861 fbe72c8b-2a22-4a75-befe-f96e90b3d361 49048 0 2021-10-27 15:49:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:49:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Oct 27 15:50:26.514: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1861 152f0a49-7038-40ab-a44d-9a64318eed14 49092 0 2021-10-27 15:50:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:50:26.515: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1861 152f0a49-7038-40ab-a44d-9a64318eed14 49092 0 2021-10-27 15:50:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Oct 27 15:50:36.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1861 152f0a49-7038-40ab-a44d-9a64318eed14 49158 0 2021-10-27 15:50:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 
15:50:36.607: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1861 152f0a49-7038-40ab-a44d-9a64318eed14 49158 0 2021-10-27 15:50:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:50:46.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1861" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":346,"skipped":6072,"failed":0} +SSSSSSSSSSSSSSOct 27 15:50:46.881: INFO: Running AfterSuite actions on all nodes +Oct 27 15:50:46.881: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 +Oct 27 15:50:46.881: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Oct 27 15:50:46.881: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 +Oct 27 15:50:46.881: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Oct 27 15:50:46.881: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Oct 27 15:50:46.881: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Oct 27 15:50:46.881: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Oct 27 15:50:46.881: INFO: Running AfterSuite actions on node 1 +Oct 27 15:50:46.881: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/e2e/artifacts/1635343201/junit_01.xml +{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6086,"failed":0} + +Ran 346 of 6432 Specs in 6642.309 seconds +SUCCESS! 
-- 346 Passed | 0 Failed | 0 Flaked | 0 Pending | 6086 Skipped
+PASS
+
+Ginkgo ran 1 suite in 1h50m44.433366112s
+Test Suite Passed
diff --git a/v1.22/gardener-aws/junit_01.xml b/v1.22/gardener-aws/junit_01.xml
new file mode 100644
index 0000000000..9903106e97
--- /dev/null
+++ b/v1.22/gardener-aws/junit_01.xml
@@ -0,0 +1,18607 @@
[The 18,607 added lines of junit_01.xml are the JUnit XML report for this run; the markup was lost in extraction and only whitespace and `+` diff markers survive, so the file body is not reproduced here.]
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
\ No newline at end of file
diff --git a/v1.22/gardener-azure/PRODUCT.yaml b/v1.22/gardener-azure/PRODUCT.yaml
new file mode 100644
index 0000000000..eb81daf276
--- /dev/null
+++ b/v1.22/gardener-azure/PRODUCT.yaml
@@ -0,0 +1,9 @@
+vendor: SAP
+name: Gardener (https://github.com/gardener/gardener) shoot cluster deployed on AZURE
+version: v1.34.0
+website_url: https://gardener.cloud
+repo_url: https://github.com/gardener/
+documentation_url: https://github.com/gardener/documentation/wiki
+product_logo_url: https://raw.githubusercontent.com/gardener/documentation/master/images/logo_w_saplogo.svg
+type: installer
+description: Gardener implements automated management and operation of Kubernetes clusters as a service and aims to support that service on multiple cloud providers.
\ No newline at end of file
diff --git a/v1.22/gardener-azure/README.md b/v1.22/gardener-azure/README.md
new file mode 100644
index 0000000000..647dbcb2f7
--- /dev/null
+++ b/v1.22/gardener-azure/README.md
@@ -0,0 +1,80 @@
+# Reproducing the test results:
+
+## Install Gardener on your Kubernetes Landscape
+Check out https://github.com/gardener/garden-setup for more detailed instructions and additional information. To install Gardener in your base cluster, the command line tool [sow](https://github.com/gardener/sow) is used. Use the provided Docker image that already contains `sow` and all required tools. To execute `sow` you call a [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) which starts `sow` in a Docker container (Docker will download the image from [eu.gcr.io/gardener-project/sow](http://eu.gcr.io/gardener-project/sow) if it is not available locally yet). Docker executes the `sow` command with the given arguments, mounts parts of your file system into that container so that `sow` can read the configuration files for the installation of the Gardener components and persist the state of your installation, and removes the container again after `sow` has finished.
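+
+Conceptually, the wrapper is a thin `docker run` around `sow`. The following is a minimal sketch of that mechanism only, assuming an illustrative mount path and invocation; the actual wrapper script in `docker/bin` handles the details and may differ:
+
+```bash
+# Sketch only, not the actual wrapper script: run sow from the published
+# image, mount the current directory so sow can read configuration files
+# and persist state, and remove the container again afterwards (--rm).
+docker run --rm -v "$PWD:/landscape" -w /landscape \
+  eu.gcr.io/gardener-project/sow sow "$@"
+```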
+
+1. Clone the `sow` repository and add the path to our [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) to your `PATH` variable so you can call `sow` on the command line.
+
+   ```bash
+   # setup for calling sow via the wrapper
+   git clone "https://github.com/gardener/sow"
+   cd sow
+   export PATH=$PATH:$PWD/docker/bin
+   ```
+
+2. Create a directory `landscape` for your Gardener landscape and clone this repository into a subdirectory called `crop`:
+
+   ```bash
+   cd ..
+   mkdir landscape
+   cd landscape
+   git clone "https://github.com/gardener/garden-setup" crop
+   ```
+
+3. If you don't have your `kubeconfig` stored locally somewhere yet, download it. For example, for GKE you would use the following command:
+
+   ```bash
+   gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
+   ```
+
+4. Save your `kubeconfig` somewhere in your `landscape` directory. For the remaining steps we will assume that you saved it using the file path `landscape/kubeconfig`.
+
+5. In your `landscape` directory, create a configuration file called `acre.yaml` (a minimal sketch is shown after this list). The structure of the configuration file is described in [configuration file acre.yaml](https://github.com/gardener/garden-setup#configuration-file-acreyaml); note that the relative file path `./kubeconfig` must be specified in field `landscape.cluster.kubeconfig` in the configuration file.
+
+   > Do not use file `acre.yaml` in directory `crop`. This file is used internally by the installation tool.
+
+6. If you created the base cluster using GKE, convert your `kubeconfig` file to one that uses basic authentication with Google-specific configuration parameters:
+
+   ```bash
+   sow convertkubeconfig
+   ```
+   When asked for credentials, enter the ones that the GKE dashboard shows when clicking on `show credentials`.
+
+   `sow` will replace the file specified in `landscape.cluster.kubeconfig` of your `acre.yaml` file with a kubeconfig file that uses basic authentication.
+
+7. In your first terminal window, use the following command to check in which order the components will be installed. Nothing will be deployed yet, and this way you can verify that the syntax in `acre.yaml` is correct:
+
+   ```bash
+   sow order -A
+   ```
+
+8. If there are no error messages, use the following command to deploy Gardener on your base cluster:
+
+   ```bash
+   sow deploy -A
+   ```
+
+9. `sow` now starts to install Gardener in your base cluster. The installation can take about 30 minutes. `sow` prints status messages to the terminal window so that you can check the status of the installation. If you opened a second terminal window to watch the cluster, it will show the newly created Kubernetes resources after a while and whether their deployment was successful. Wait until the last component is deployed and all created Kubernetes resources are in status `Running`.
+
+10. Use the following command to find out the URL of the Gardener dashboard:
+
+   ```bash
+   sow url
+   ```
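+
+As referenced in step 5, here is a minimal, illustrative sketch of `acre.yaml`. Only the `landscape.cluster.kubeconfig` field is taken from the instructions above; the landscape name is a placeholder, and a real configuration needs additional fields (see the garden-setup documentation):
+
+```yaml
+# minimal illustrative sketch of landscape/acre.yaml -- not a complete configuration
+landscape:
+  name: my-gardener-landscape     # placeholder name
+  cluster:
+    kubeconfig: ./kubeconfig      # relative path to the base cluster kubeconfig (step 5)
+```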
+
+
+## Create Kubernetes Cluster
+
+Log in to the SAP Gardener Dashboard to create a Kubernetes cluster on the Amazon Web Services, Microsoft Azure, Google Cloud Platform, Alibaba Cloud, or OpenStack cloud provider.
+
+## Launch E2E Conformance Tests
+Set `KUBECONFIG` to the path of the kubeconfig file of your newly created cluster (you can find the kubeconfig e.g. in the Gardener dashboard). Follow the instructions below to run the Kubernetes e2e conformance tests. Adjust the values of the `k8sVersion` and `cloudprovider` arguments to match your new cluster.
+
+```bash
+# first set KUBECONFIG to your cluster
+docker run -ti --rm -v "$KUBECONFIG":/mye2e/shoot.config golang:1.13 bash
+# run all commands below within the container
+go get github.com/gardener/test-infra; cd /go/src/github.com/gardener/test-infra
+export GO111MODULE=on; export E2E_EXPORT_PATH=/tmp/export; export KUBECONFIG=/mye2e/shoot.config; export GINKGO_PARALLEL=false
+go run -mod=vendor ./integration-tests/e2e --k8sVersion=1.17.1 --cloudprovider=gcp --testcasegroup="conformance"
+```
\ No newline at end of file diff --git a/v1.22/gardener-azure/e2e.log b/v1.22/gardener-azure/e2e.log new file mode 100644 index 0000000000..067e1f3ca7 --- /dev/null +++ b/v1.22/gardener-azure/e2e.log @@ -0,0 +1,14209 @@ +Conformance test: not doing test setup. +I1027 14:08:29.420600 5768 e2e.go:129] Starting e2e run "4b4774fa-5bfd-4874-8a0a-18f78a254440" on Ginkgo node 1 +{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1635343709 - Will randomize all specs +Will run 346 of 6432 specs + +Oct 27 14:08:31.748: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:08:31.752: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Oct 27 14:08:31.816: INFO: Waiting up to 10m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready +Oct 27 14:08:31.888: INFO: 25 / 25 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Oct 27 14:08:31.888: INFO: expected 11 pod replicas in namespace 'kube-system', 11 are Running and Ready.
+Oct 27 14:08:31.888: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Oct 27 14:08:31.909: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'apiserver-proxy' (0 seconds elapsed) +Oct 27 14:08:31.909: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Oct 27 14:08:31.909: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-driver-node-disk' (0 seconds elapsed) +Oct 27 14:08:31.909: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-driver-node-file' (0 seconds elapsed) +Oct 27 14:08:31.909: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Oct 27 14:08:31.909: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed) +Oct 27 14:08:31.909: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed) +Oct 27 14:08:31.909: INFO: e2e test version: v1.22.2 +Oct 27 14:08:31.920: INFO: kube-apiserver version: v1.22.2 +Oct 27 14:08:31.920: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:08:31.935: INFO: Cluster IP family: ipv4 +SSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:31.935: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-multiple-pods +W1027 14:08:31.998087 5768 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:08:31.998: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled +Oct 27 14:08:32.023: INFO: PSP annotation exists on dry run pod: "extensions.gardener.cloud.provider-azure.csi-driver-node"; assuming PodSecurityPolicy is enabled +W1027 14:08:32.034297 5768 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +W1027 14:08:32.046835 5768 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:08:32.065: INFO: Found ClusterRoles; assuming RBAC is enabled. +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-8379 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Oct 27 14:08:32.220: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:09:32.320: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:09:32.331: INFO: Starting informer... +STEP: Starting pods... +Oct 27 14:09:32.372: INFO: Pod1 is running on shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2. 
Tainting Node +Oct 27 14:09:38.455: INFO: Pod2 is running on shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Oct 27 14:09:44.734: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Oct 27 14:10:04.797: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:04.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-8379" for this suite. +•{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":1,"skipped":3,"failed":0} +SSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:04.861: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-1154 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Oct 27 14:10:05.093: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:05.093: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:05.093: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:05.093: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:05.094: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:05.094: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:05.112: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:05.112: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:10:14.507: 
INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 14:10:14.507: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 14:10:14.839: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Oct 27 14:10:14.861: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Oct 27 14:10:14.871: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 0 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.872: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.879: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.879: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.879: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.879: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.879: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.879: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:14.883: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:14.883: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:14.892: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:14.892: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:16.868: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:16.868: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:16.882: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +STEP: listing Deployments +Oct 27 14:10:16.895: INFO: Found test-deployment with labels: map[test-deployment:patched 
test-deployment-static:true] +STEP: updating the Deployment +Oct 27 14:10:16.924: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Oct 27 14:10:16.952: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:16.952: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:16.952: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:16.952: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:16.961: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:20.537: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:25.901: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:25.921: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:25.925: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 27 14:10:34.614: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Oct 27 14:10:34.680: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:34.680: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:34.680: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:34.680: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:34.680: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 1 +Oct 27 14:10:34.680: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:34.681: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 3 +Oct 27 14:10:34.681: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:34.681: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 2 +Oct 27 14:10:34.681: INFO: observed Deployment test-deployment in namespace deployment-1154 with ReadyReplicas 3 +STEP: deleting the Deployment +Oct 27 14:10:34.704: INFO: observed event type MODIFIED +Oct 27 14:10:34.704: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type 
MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +Oct 27 14:10:34.707: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:10:34.718: INFO: Log out all the ReplicaSets if there is no deployment created +Oct 27 14:10:34.731: INFO: ReplicaSet "test-deployment-56c98d85f9": +&ReplicaSet{ObjectMeta:{test-deployment-56c98d85f9 deployment-1154 2a496c8d-9537-477b-acae-4f115704b7d8 6764 4 2021-10-27 14:10:14 +0000 UTC map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment ba00c09e-ee5c-4e15-b86f-59a70e2aa07b 0xc0038ee087 0xc0038ee088}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:10:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba00c09e-ee5c-4e15-b86f-59a70e2aa07b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:10:34 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 56c98d85f9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.5 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038ee110 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Oct 27 14:10:34.743: INFO: pod: "test-deployment-56c98d85f9-f7hfp": +&Pod{ObjectMeta:{test-deployment-56c98d85f9-f7hfp test-deployment-56c98d85f9- deployment-1154 3f68b81d-aa8c-4629-ad0d-09ec6c0d3f4d 6762 0 2021-10-27 14:10:16 +0000 UTC 2021-10-27 14:10:35 +0000 UTC 0xc00390e070 map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[cni.projectcalico.org/containerID:38bcdab7ac3a6bc44b3592b454c6a5e917f5199a01c576ce592ff3b10279ccdd 
cni.projectcalico.org/podIP:100.96.0.18/32 cni.projectcalico.org/podIPs:100.96.0.18/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-56c98d85f9 2a496c8d-9537-477b-acae-4f115704b7d8 0xc00390e0c7 0xc00390e0c8}] [] [{kube-controller-manager Update v1 2021-10-27 14:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a496c8d-9537-477b-acae-4f115704b7d8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:10:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:10:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vzh7v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzh7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:100.96.0.18,StartTime:2021-10-27 14:10:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:10:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.5,ImageID:k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07,ContainerID:containerd://12520a10eea53000090d0a7483b533de279c607dbe43c38888e43829773d2f4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 27 14:10:34.743: INFO: ReplicaSet "test-deployment-855f7994f9": +&ReplicaSet{ObjectMeta:{test-deployment-855f7994f9 deployment-1154 f3273971-1354-48d5-8948-f03899e7ca64 6617 3 2021-10-27 14:10:05 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment ba00c09e-ee5c-4e15-b86f-59a70e2aa07b 0xc0038ee177 0xc0038ee178}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:10:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba00c09e-ee5c-4e15-b86f-59a70e2aa07b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:10:16 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 855f7994f9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038ee200 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Oct 27 14:10:34.755: INFO: ReplicaSet "test-deployment-d4dfddfbf": +&ReplicaSet{ObjectMeta:{test-deployment-d4dfddfbf deployment-1154 9fba66ea-c351-426f-89b5-64575c098c31 6759 2 2021-10-27 14:10:16 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment ba00c09e-ee5c-4e15-b86f-59a70e2aa07b 0xc0038ee267 0xc0038ee268}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba00c09e-ee5c-4e15-b86f-59a70e2aa07b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:10:25 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: d4dfddfbf,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038ee300 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Oct 27 14:10:34.767: INFO: pod: "test-deployment-d4dfddfbf-chq8m": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-chq8m test-deployment-d4dfddfbf- deployment-1154 bcd861d7-1575-4e1e-ab7a-5524a9a57884 6692 0 2021-10-27 14:10:16 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[cni.projectcalico.org/containerID:7f53b9466ce4a136777b501911635c8ff8812f9acf53faa51fc199d814d8b5c5 cni.projectcalico.org/podIP:100.96.1.11/32 cni.projectcalico.org/podIPs:100.96.1.11/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 9fba66ea-c351-426f-89b5-64575c098c31 0xc00390eee7 0xc00390eee8}] [] [{kube-controller-manager Update v1 2021-10-27 14:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9fba66ea-c351-426f-89b5-64575c098c31\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:10:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:10:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gvj4j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvj4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-10-27 14:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.11,StartTime:2021-10-27 14:10:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:10:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://26f0c1aeb2b571fcb48821d88f5693d6d0062700ba46d6d815a30994bba4ae58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 27 14:10:34.768: INFO: pod: "test-deployment-d4dfddfbf-nkcgh": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-nkcgh test-deployment-d4dfddfbf- deployment-1154 c32d52a0-3e24-4a48-960b-c7b774f90c83 6758 0 2021-10-27 14:10:25 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[cni.projectcalico.org/containerID:521cab4fea562368ce0c390e52ba770f8ac473b4958e1a12e8458dbd949abe82 cni.projectcalico.org/podIP:100.96.0.19/32 cni.projectcalico.org/podIPs:100.96.0.19/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 9fba66ea-c351-426f-89b5-64575c098c31 0xc00390f117 0xc00390f118}] [] [{kube-controller-manager Update v1 2021-10-27 14:10:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9fba66ea-c351-426f-89b5-64575c098c31\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:10:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:10:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-26524,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26524,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-10-27 14:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:10:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:100.96.0.19,StartTime:2021-10-27 14:10:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:10:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://c334372e2d7f6ab6817c609c793c956ecf8050c5f27196591aa381ad8f6799a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:34.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1154" for this suite. +•{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":2,"skipped":8,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:34.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5301 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-b7601906-87f5-4a72-9bce-c3c21026c802 in namespace container-probe-5301 +Oct 27 14:10:37.027: INFO: Started pod liveness-b7601906-87f5-4a72-9bce-c3c21026c802 in namespace container-probe-5301 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:10:37.039: INFO: Initial restart count of pod liveness-b7601906-87f5-4a72-9bce-c3c21026c802 is 0 +Oct 27 14:10:57.226: INFO: Restart count of pod container-probe-5301/liveness-b7601906-87f5-4a72-9bce-c3c21026c802 is now 1 
(20.187133107s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:57.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5301" for this suite. +•{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":3,"skipped":41,"failed":0} +S +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:57.279: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9422 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:57.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9422" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":4,"skipped":42,"failed":0} +SSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:57.608: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-6254 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:57.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-6254" for this suite. 
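The Secrets test above verifies that a Secret marked `immutable: true` rejects further data updates. A minimal sketch of the behavior it exercises, assuming `kubectl` access to a scratch namespace (all names illustrative, not taken from this run):

```bash
# Create a secret, then mark it immutable.
kubectl create secret generic demo-immutable --from-literal=key=value
kubectl patch secret demo-immutable -p '{"immutable": true}'

# Once immutable, any change to the data (or to the immutable flag itself)
# is rejected by the API server with a "field is immutable" error.
kubectl patch secret demo-immutable -p '{"data":{"key":"bmV3"}}' \
  && echo "unexpected: update accepted" \
  || echo "update rejected, as the conformance test expects"

# The only way to change the contents is to delete and recreate the secret.
kubectl delete secret demo-immutable
```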
+•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":5,"skipped":49,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:57.915: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-8599 +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:10:58.097: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:01.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-8599" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":6,"skipped":72,"failed":0} +SSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:01.137: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-4401 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 14:11:01.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 14:11:01.423: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 14:11:01.435: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 before test +Oct 27 14:11:01.456: INFO: addons-nginx-ingress-controller-76f55b7b5f-ffxv8 from kube-system started at 2021-10-27 14:09:38 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-w2blg from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: apiserver-proxy-vdnm2 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (2 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: calico-node-bmkxt from kube-system started at 2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: calico-node-vertical-autoscaler-785b5f968-sbxt6 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: calico-typha-deploy-546b97d4b5-kw64w from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-p96rk from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: calico-typha-vertical-autoscaler-5c9655cddd-z7tgn from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: coredns-7649bdf444-cnjp5 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: coredns-7649bdf444-x6nkv from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: csi-driver-node-disk-tb5lc from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: csi-driver-node-file-8vk78 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: kube-proxy-dtkq4 from kube-system started at 2021-10-27 14:04:47 +0000 UTC (2 container statuses recorded) +Oct 27 14:11:01.456: INFO: 
Container conntrack-fix ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: metrics-server-5555d7587-mw896 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: node-exporter-fg8qw from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: node-problem-detector-bxt7r from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: vpn-shoot-7f6446d489-9kghs from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: dashboard-metrics-scraper-7ccbfc448f-jcrjk from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 14:11:01.456: INFO: kubernetes-dashboard-65d5f5c55-sf9qc from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.456: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 14:11:01.456: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 before test +Oct 27 14:11:01.481: INFO: apiserver-proxy-8bg6p from kube-system started at 2021-10-27 13:56:32 +0000 UTC (2 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: blackbox-exporter-65c549b94c-vc8rp from kube-system started at 2021-10-27 14:08:45 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: calico-node-v56vf from kube-system started at 2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: csi-driver-node-disk-h74nf from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: csi-driver-node-file-q9zq2 from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: kube-proxy-qrj7x from kube-system started at 2021-10-27 14:04:47 +0000 UTC (2 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: 
node-exporter-fs6fl from kube-system started at 2021-10-27 13:56:32 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:11:01.481: INFO: node-problem-detector-srvcj from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 14:11:01.481: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-ef01efdf-872c-496a-8ac2-673f552f966a 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-ef01efdf-872c-496a-8ac2-673f552f966a off the node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-ef01efdf-872c-496a-8ac2-673f552f966a +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:09.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-4401" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":7,"skipped":78,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:09.748: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8403 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:11:09.937: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8403 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 14:11:10.081: INFO: stderr: "" +Oct 27 14:11:10.081: INFO: stdout: "pod/e2e-test-httpd-pod created\n" 
+STEP: replace the image in the pod with server-side dry-run +Oct 27 14:11:10.081: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8403 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' +Oct 27 14:11:10.459: INFO: stderr: "" +Oct 27 14:11:10.460: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:11:10.471: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8403 delete pods e2e-test-httpd-pod' +Oct 27 14:11:13.115: INFO: stderr: "" +Oct 27 14:11:13.115: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:13.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8403" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":8,"skipped":102,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:13.235: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-431 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 14:11:13.442: INFO: Waiting up to 5m0s for pod "pod-742300f8-3ba4-48f0-a45a-043046f43bd7" in namespace "emptydir-431" to be "Succeeded or Failed" +Oct 27 14:11:13.454: INFO: Pod "pod-742300f8-3ba4-48f0-a45a-043046f43bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.779513ms +Oct 27 14:11:15.467: INFO: Pod "pod-742300f8-3ba4-48f0-a45a-043046f43bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024590082s +Oct 27 14:11:17.480: INFO: Pod "pod-742300f8-3ba4-48f0-a45a-043046f43bd7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03754667s +STEP: Saw pod success +Oct 27 14:11:17.480: INFO: Pod "pod-742300f8-3ba4-48f0-a45a-043046f43bd7" satisfied condition "Succeeded or Failed" +Oct 27 14:11:17.492: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-742300f8-3ba4-48f0-a45a-043046f43bd7 container test-container: +STEP: delete the pod +Oct 27 14:11:17.604: INFO: Waiting for pod pod-742300f8-3ba4-48f0-a45a-043046f43bd7 to disappear +Oct 27 14:11:17.615: INFO: Pod pod-742300f8-3ba4-48f0-a45a-043046f43bd7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:17.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-431" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":9,"skipped":136,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:17.653: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-9427 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:18.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-9427" for this suite. 
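The Events API steps above exercise plain CRUD plus field-selector filtering. The same queries can be issued directly with `kubectl`; a sketch, assuming a cluster with some recorded events (the selectors shown are examples, not the ones the test uses internally):

```bash
# List events in all namespaces, then in a single namespace,
# mirroring the test's first two listing steps.
kubectl get events --all-namespaces
kubectl get events -n default

# Field selectors narrow the listing the same way the test filters
# on source and reportingController.
kubectl get events -n default --field-selector reason=Started
kubectl get events -n default --field-selector involvedObject.kind=Pod
```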
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":10,"skipped":175,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:18.066: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1802 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Oct 27 14:11:18.304: INFO: observed Pod pod-test in namespace pods-1802 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Oct 27 14:11:18.304: INFO: observed Pod pod-test in namespace pods-1802 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC }] +Oct 27 14:11:18.338: INFO: observed Pod pod-test in namespace pods-1802 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC }] +Oct 27 14:11:18.971: INFO: observed Pod pod-test in namespace pods-1802 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC }] +Oct 27 14:11:20.109: INFO: Found Pod pod-test in namespace pods-1802 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:11:18 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Oct 27 14:11:20.136: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched +STEP: replacing the Pod's status 
Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Oct 27 14:11:20.199: INFO: observed event type ADDED +Oct 27 14:11:20.199: INFO: observed event type MODIFIED +Oct 27 14:11:20.200: INFO: observed event type MODIFIED +Oct 27 14:11:20.200: INFO: observed event type MODIFIED +Oct 27 14:11:20.200: INFO: observed event type MODIFIED +Oct 27 14:11:20.200: INFO: observed event type MODIFIED +Oct 27 14:11:20.200: INFO: observed event type MODIFIED +Oct 27 14:11:20.200: INFO: observed event type MODIFIED +Oct 27 14:11:22.116: INFO: observed event type MODIFIED +Oct 27 14:11:22.677: INFO: observed event type MODIFIED +Oct 27 14:11:23.121: INFO: observed event type MODIFIED +Oct 27 14:11:23.141: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:23.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1802" for this suite. +•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":11,"skipped":193,"failed":0} +SSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:23.181: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5396 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-750882ad-a077-49ba-b6d6-934c123f4ee7 +STEP: Creating a pod to test consume secrets +Oct 27 14:11:23.396: INFO: Waiting up to 5m0s for pod "pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e" in namespace "secrets-5396" to be "Succeeded or Failed" +Oct 27 14:11:23.407: INFO: Pod "pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.90609ms +Oct 27 14:11:25.420: INFO: Pod "pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024368201s +Oct 27 14:11:27.435: INFO: Pod "pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038658265s +STEP: Saw pod success +Oct 27 14:11:27.435: INFO: Pod "pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e" satisfied condition "Succeeded or Failed" +Oct 27 14:11:27.446: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e container secret-volume-test: +STEP: delete the pod +Oct 27 14:11:27.559: INFO: Waiting for pod pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e to disappear +Oct 27 14:11:27.570: INFO: Pod pod-secrets-d756414f-43a8-4df2-a660-09bfc355b27e no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:27.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5396" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":12,"skipped":197,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:27.605: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6994 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name projected-secret-test-f48851da-5c8a-4175-91e6-6e677dcb1e3d +STEP: Creating a pod to test consume secrets +Oct 27 14:11:27.819: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280" in namespace "projected-6994" to be "Succeeded or Failed" +Oct 27 14:11:27.830: INFO: Pod "pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280": Phase="Pending", Reason="", readiness=false. Elapsed: 10.808442ms +Oct 27 14:11:29.843: INFO: Pod "pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024293089s +Oct 27 14:11:31.855: INFO: Pod "pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036012414s +STEP: Saw pod success +Oct 27 14:11:31.855: INFO: Pod "pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280" satisfied condition "Succeeded or Failed" +Oct 27 14:11:31.867: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280 container secret-volume-test: +STEP: delete the pod +Oct 27 14:11:31.942: INFO: Waiting for pod pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280 to disappear +Oct 27 14:11:31.953: INFO: Pod pod-projected-secrets-9f6b4f27-4517-458a-be2f-27182c500280 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:31.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6994" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":13,"skipped":208,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:31.988: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-8589 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:11:32.224: INFO: The status of Pod pod-secrets-22eccb83-e51f-4bc5-b11c-508483a2f1bc is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:11:34.236: INFO: The status of Pod pod-secrets-22eccb83-e51f-4bc5-b11c-508483a2f1bc is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:34.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-8589" for this suite. 
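The wrapper-volume test above creates a secret and a configmap, mounts both in one pod, and checks that the volume plugins do not conflict. A minimal sketch of such a pod, assuming the referenced secret and configmap exist (all names illustrative):

```bash
kubectl create secret generic demo-secret --from-literal=s=1
kubectl create configmap demo-config --from-literal=c=2

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-wrapper
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "ls /etc/secret /etc/config"]
    volumeMounts:
    - {name: secret-vol, mountPath: /etc/secret}
    - {name: config-vol, mountPath: /etc/config}
  volumes:
  - name: secret-vol
    secret: {secretName: demo-secret}
  - name: config-vol
    configMap: {name: demo-config}
EOF
```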
+•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":14,"skipped":261,"failed":0} + +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:34.323: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-950 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-950.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-950.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-950.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-950.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-950.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-950.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:11:50.940: INFO: DNS probes using dns-950/dns-test-fe14ccbb-d887-48b5-8528-4c9e194b5e6c succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:50.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-950" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":15,"skipped":261,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:51.000: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-191 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-191 +STEP: creating service affinity-nodeport in namespace services-191 +STEP: creating replication controller affinity-nodeport in namespace services-191 +I1027 14:11:51.225466 5768 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-191, replica count: 3 +I1027 14:11:54.278067 5768 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:11:57.280255 5768 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:11:57.327: INFO: Creating new exec pod +Oct 27 14:12:00.396: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-191 exec execpod-affinitykc58w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Oct 27 14:12:00.953: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Oct 27 14:12:00.953: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:12:00.953: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-191 exec execpod-affinitykc58w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.69.209.199 80' +Oct 27 14:12:01.459: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.69.209.199 80\nConnection to 100.69.209.199 80 port [tcp/http] succeeded!\n" +Oct 27 14:12:01.459: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:12:01.459: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-191 exec 
execpod-affinitykc58w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.5 31407' +Oct 27 14:12:01.972: INFO: stderr: "+ nc -v -t -w 2 10.250.0.5 31407\n+ echo hostName\nConnection to 10.250.0.5 31407 port [tcp/*] succeeded!\n" +Oct 27 14:12:01.972: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:12:01.972: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-191 exec execpod-affinitykc58w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.4 31407' +Oct 27 14:12:02.500: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.4 31407\nConnection to 10.250.0.4 31407 port [tcp/*] succeeded!\n" +Oct 27 14:12:02.500: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:12:02.500: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-191 exec execpod-affinitykc58w -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.5:31407/ ; done' +Oct 27 14:12:03.163: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:31407/\n" +Oct 27 14:12:03.163: INFO: stdout: "\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg\naffinity-nodeport-gmfdg" +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response 
from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Received response from host: affinity-nodeport-gmfdg +Oct 27 14:12:03.163: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-191, will wait for the garbage collector to delete the pods +Oct 27 14:12:03.252: INFO: Deleting ReplicationController affinity-nodeport took: 13.081833ms +Oct 27 14:12:03.452: INFO: Terminating ReplicationController affinity-nodeport pods took: 200.525906ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:06.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-191" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":16,"skipped":274,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:06.617: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6316 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:12:06.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9" in namespace "projected-6316" to be "Succeeded or Failed" +Oct 27 14:12:06.829: INFO: Pod "downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.001142ms +Oct 27 14:12:08.841: INFO: Pod "downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023172528s +Oct 27 14:12:10.854: INFO: Pod "downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036061384s +STEP: Saw pod success +Oct 27 14:12:10.854: INFO: Pod "downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9" satisfied condition "Succeeded or Failed" +Oct 27 14:12:10.865: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9 container client-container: +STEP: delete the pod +Oct 27 14:12:10.980: INFO: Waiting for pod downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9 to disappear +Oct 27 14:12:10.991: INFO: Pod downwardapi-volume-74ab7e17-dbd7-4e88-b604-8da3364933f9 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:10.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6316" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":17,"skipped":279,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:11.028: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5535 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Starting the proxy +Oct 27 14:12:11.216: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5535 proxy --unix-socket=/tmp/kubectl-proxy-unix221270120/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:11.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5535" for this suite. 
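The proxy test above starts `kubectl proxy` on a Unix domain socket rather than a TCP port and fetches `/api/` through it. A sketch of the same check, assuming a local `kubectl` and a `curl` built with Unix-socket support (socket path illustrative):

```bash
# Serve the API through a Unix socket instead of a TCP port.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
sleep 1

# curl can speak HTTP over the socket directly; the host part of the
# URL is ignored but must be present.
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

kill "$PROXY_PID"
```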
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":18,"skipped":294,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:11.297: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-8845 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-8845 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:12:11.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:12:11.558: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:12:13.570: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:12:15.570: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:17.571: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:19.571: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:21.577: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:23.569: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:25.571: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:27.571: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:29.570: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:31.571: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:12:33.571: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:12:33.594: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:12:37.656: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:12:37.656: INFO: Breadth first check of 100.96.0.21 on host 10.250.0.5... +Oct 27 14:12:37.668: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.27:9080/dial?request=hostname&protocol=http&host=100.96.0.21&port=8083&tries=1'] Namespace:pod-network-test-8845 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:12:37.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:12:38.147: INFO: Waiting for responses: map[] +Oct 27 14:12:38.147: INFO: reached 100.96.0.21 after 0/1 tries +Oct 27 14:12:38.147: INFO: Breadth first check of 100.96.1.26 on host 10.250.0.4... 
+Oct 27 14:12:38.158: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.27:9080/dial?request=hostname&protocol=http&host=100.96.1.26&port=8083&tries=1'] Namespace:pod-network-test-8845 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:12:38.158: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:12:38.591: INFO: Waiting for responses: map[] +Oct 27 14:12:38.591: INFO: reached 100.96.1.26 after 0/1 tries +Oct 27 14:12:38.591: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:12:38.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-8845" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":19,"skipped":310,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:12:38.626: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9742 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Oct 27 14:12:38.814: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Oct 27 14:12:53.763: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:12:57.639: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:12.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9742" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":20,"skipped":349,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:12.042: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6750 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-6750 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:13:12.269: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 14:13:22.284: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Oct 27 14:13:22.398: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:13:22.398: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 14:13:32.413: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:13:32.413: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:13:32.472: INFO: Deleting all statefulset in ns statefulset-6750 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:32.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6750" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":21,"skipped":357,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:32.541: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3854 +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:13:32.726: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:33.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3854" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":22,"skipped":366,"failed":0} +SSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:33.355: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1935 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 14:13:33.585: INFO: Waiting up to 5m0s for pod "pod-a20cf378-d692-4bf3-85e7-c66d879962de" in namespace "emptydir-1935" to be "Succeeded or Failed" +Oct 27 14:13:33.597: INFO: Pod "pod-a20cf378-d692-4bf3-85e7-c66d879962de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.371259ms +Oct 27 14:13:35.612: INFO: Pod "pod-a20cf378-d692-4bf3-85e7-c66d879962de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026736632s +Oct 27 14:13:37.624: INFO: Pod "pod-a20cf378-d692-4bf3-85e7-c66d879962de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0382957s +STEP: Saw pod success +Oct 27 14:13:37.624: INFO: Pod "pod-a20cf378-d692-4bf3-85e7-c66d879962de" satisfied condition "Succeeded or Failed" +Oct 27 14:13:37.634: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-a20cf378-d692-4bf3-85e7-c66d879962de container test-container: +STEP: delete the pod +Oct 27 14:13:37.749: INFO: Waiting for pod pod-a20cf378-d692-4bf3-85e7-c66d879962de to disappear +Oct 27 14:13:37.760: INFO: Pod pod-a20cf378-d692-4bf3-85e7-c66d879962de no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:37.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1935" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":23,"skipped":369,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:37.793: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svc-latency +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-191 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:13:37.983: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating replication controller svc-latency-rc in namespace svc-latency-191 +I1027 14:13:38.004590 5768 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-191, replica count: 1 +I1027 14:13:39.056204 5768 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:13:40.056916 5768 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:13:40.179: INFO: Created: latency-svc-t9jmv +Oct 27 14:13:40.185: INFO: Got endpoints: latency-svc-t9jmv [28.064628ms] +Oct 27 14:13:40.206: INFO: Created: latency-svc-hzqf9 +Oct 27 14:13:40.209: INFO: Got endpoints: latency-svc-hzqf9 [23.121905ms] +Oct 27 14:13:40.218: INFO: Created: latency-svc-6knds +Oct 27 14:13:40.220: INFO: Got endpoints: latency-svc-6knds [34.814012ms] +Oct 27 14:13:40.226: INFO: Created: latency-svc-nxkst +Oct 27 14:13:40.229: INFO: Got endpoints: latency-svc-nxkst [43.276823ms] +Oct 27 14:13:40.235: INFO: Created: latency-svc-w6g4k +Oct 27 14:13:40.243: INFO: Created: latency-svc-94trf 
+Oct 27 14:13:40.243: INFO: Got endpoints: latency-svc-w6g4k [57.267534ms] +Oct 27 14:13:40.247: INFO: Got endpoints: latency-svc-94trf [61.338008ms] +Oct 27 14:13:40.251: INFO: Created: latency-svc-5l2gh +Oct 27 14:13:40.255: INFO: Got endpoints: latency-svc-5l2gh [69.702515ms] +Oct 27 14:13:40.259: INFO: Created: latency-svc-hw6f6 +Oct 27 14:13:40.266: INFO: Got endpoints: latency-svc-hw6f6 [80.225275ms] +Oct 27 14:13:40.275: INFO: Created: latency-svc-m4hqd +Oct 27 14:13:40.277: INFO: Got endpoints: latency-svc-m4hqd [91.484828ms] +Oct 27 14:13:40.284: INFO: Created: latency-svc-8k4ph +Oct 27 14:13:40.288: INFO: Got endpoints: latency-svc-8k4ph [101.710613ms] +Oct 27 14:13:40.292: INFO: Created: latency-svc-2x4kg +Oct 27 14:13:40.294: INFO: Got endpoints: latency-svc-2x4kg [108.363557ms] +Oct 27 14:13:40.304: INFO: Created: latency-svc-b5ffn +Oct 27 14:13:40.311: INFO: Got endpoints: latency-svc-b5ffn [124.67609ms] +Oct 27 14:13:40.316: INFO: Created: latency-svc-nzhpm +Oct 27 14:13:40.320: INFO: Got endpoints: latency-svc-nzhpm [134.239611ms] +Oct 27 14:13:40.325: INFO: Created: latency-svc-pn6m6 +Oct 27 14:13:40.332: INFO: Created: latency-svc-sqw98 +Oct 27 14:13:40.332: INFO: Got endpoints: latency-svc-pn6m6 [145.99883ms] +Oct 27 14:13:40.336: INFO: Got endpoints: latency-svc-sqw98 [149.643818ms] +Oct 27 14:13:40.341: INFO: Created: latency-svc-vsxgj +Oct 27 14:13:40.348: INFO: Created: latency-svc-t52d8 +Oct 27 14:13:40.348: INFO: Got endpoints: latency-svc-vsxgj [162.080744ms] +Oct 27 14:13:40.355: INFO: Got endpoints: latency-svc-t52d8 [146.013595ms] +Oct 27 14:13:40.360: INFO: Created: latency-svc-m5nb6 +Oct 27 14:13:40.366: INFO: Created: latency-svc-srth5 +Oct 27 14:13:40.366: INFO: Got endpoints: latency-svc-m5nb6 [145.859097ms] +Oct 27 14:13:40.370: INFO: Got endpoints: latency-svc-srth5 [140.927007ms] +Oct 27 14:13:40.374: INFO: Created: latency-svc-cgtdp +Oct 27 14:13:40.384: INFO: Got endpoints: latency-svc-cgtdp [140.975373ms] +Oct 27 14:13:40.386: INFO: Created: latency-svc-whlq9 +Oct 27 14:13:40.412: INFO: Got endpoints: latency-svc-whlq9 [165.37815ms] +Oct 27 14:13:40.418: INFO: Created: latency-svc-57dcb +Oct 27 14:13:40.421: INFO: Got endpoints: latency-svc-57dcb [165.571563ms] +Oct 27 14:13:40.428: INFO: Created: latency-svc-846vj +Oct 27 14:13:40.436: INFO: Created: latency-svc-j5hc9 +Oct 27 14:13:40.436: INFO: Got endpoints: latency-svc-846vj [169.545432ms] +Oct 27 14:13:40.442: INFO: Got endpoints: latency-svc-j5hc9 [164.696073ms] +Oct 27 14:13:40.443: INFO: Created: latency-svc-c95g5 +Oct 27 14:13:40.446: INFO: Got endpoints: latency-svc-c95g5 [157.926536ms] +Oct 27 14:13:40.454: INFO: Created: latency-svc-h76rb +Oct 27 14:13:40.456: INFO: Got endpoints: latency-svc-h76rb [161.853001ms] +Oct 27 14:13:40.465: INFO: Created: latency-svc-2smfx +Oct 27 14:13:40.476: INFO: Created: latency-svc-nfs5g +Oct 27 14:13:40.476: INFO: Got endpoints: latency-svc-2smfx [165.68929ms] +Oct 27 14:13:40.482: INFO: Got endpoints: latency-svc-nfs5g [162.154243ms] +Oct 27 14:13:40.482: INFO: Created: latency-svc-g9zpc +Oct 27 14:13:40.495: INFO: Got endpoints: latency-svc-g9zpc [162.822226ms] +Oct 27 14:13:40.495: INFO: Created: latency-svc-jbxs9 +Oct 27 14:13:40.512: INFO: Got endpoints: latency-svc-jbxs9 [176.29886ms] +Oct 27 14:13:40.517: INFO: Created: latency-svc-ctsfv +Oct 27 14:13:40.523: INFO: Got endpoints: latency-svc-ctsfv [174.643785ms] +Oct 27 14:13:40.527: INFO: Created: latency-svc-298pv +Oct 27 14:13:40.532: INFO: Got endpoints: latency-svc-298pv [177.11814ms] +Oct 27 
14:13:40.536: INFO: Created: latency-svc-j42nr +Oct 27 14:13:40.538: INFO: Got endpoints: latency-svc-j42nr [172.14033ms] +Oct 27 14:13:40.544: INFO: Created: latency-svc-dkqwv +Oct 27 14:13:40.546: INFO: Got endpoints: latency-svc-dkqwv [176.307184ms] +Oct 27 14:13:40.552: INFO: Created: latency-svc-xv6jc +Oct 27 14:13:40.556: INFO: Got endpoints: latency-svc-xv6jc [171.586813ms] +Oct 27 14:13:40.560: INFO: Created: latency-svc-5pgq8 +Oct 27 14:13:40.568: INFO: Created: latency-svc-h5bx8 +Oct 27 14:13:40.570: INFO: Got endpoints: latency-svc-5pgq8 [157.532283ms] +Oct 27 14:13:40.571: INFO: Got endpoints: latency-svc-h5bx8 [150.114798ms] +Oct 27 14:13:40.577: INFO: Created: latency-svc-r8gt7 +Oct 27 14:13:40.584: INFO: Got endpoints: latency-svc-r8gt7 [148.160386ms] +Oct 27 14:13:40.584: INFO: Created: latency-svc-jctfp +Oct 27 14:13:40.592: INFO: Created: latency-svc-4gs2v +Oct 27 14:13:40.611: INFO: Created: latency-svc-ntj2q +Oct 27 14:13:40.618: INFO: Created: latency-svc-z2t8h +Oct 27 14:13:40.625: INFO: Created: latency-svc-bbjkk +Oct 27 14:13:40.633: INFO: Created: latency-svc-mhsz4 +Oct 27 14:13:40.634: INFO: Got endpoints: latency-svc-jctfp [191.894318ms] +Oct 27 14:13:40.641: INFO: Created: latency-svc-xdsgc +Oct 27 14:13:40.652: INFO: Created: latency-svc-9m8tg +Oct 27 14:13:40.660: INFO: Created: latency-svc-pzc58 +Oct 27 14:13:40.669: INFO: Created: latency-svc-mzdkr +Oct 27 14:13:40.677: INFO: Created: latency-svc-7pk6h +Oct 27 14:13:40.685: INFO: Got endpoints: latency-svc-4gs2v [238.849066ms] +Oct 27 14:13:40.685: INFO: Created: latency-svc-gxm4c +Oct 27 14:13:40.693: INFO: Created: latency-svc-2p9wl +Oct 27 14:13:40.710: INFO: Created: latency-svc-5h7vq +Oct 27 14:13:40.717: INFO: Created: latency-svc-fsszs +Oct 27 14:13:40.724: INFO: Created: latency-svc-6nftx +Oct 27 14:13:40.732: INFO: Created: latency-svc-p4zhd +Oct 27 14:13:40.733: INFO: Got endpoints: latency-svc-ntj2q [276.699488ms] +Oct 27 14:13:40.751: INFO: Created: latency-svc-z78r8 +Oct 27 14:13:40.784: INFO: Got endpoints: latency-svc-z2t8h [307.662494ms] +Oct 27 14:13:40.804: INFO: Created: latency-svc-ngppr +Oct 27 14:13:40.833: INFO: Got endpoints: latency-svc-bbjkk [350.855583ms] +Oct 27 14:13:40.853: INFO: Created: latency-svc-rrtsx +Oct 27 14:13:40.883: INFO: Got endpoints: latency-svc-mhsz4 [388.392745ms] +Oct 27 14:13:40.906: INFO: Created: latency-svc-lk7pf +Oct 27 14:13:40.933: INFO: Got endpoints: latency-svc-xdsgc [420.968058ms] +Oct 27 14:13:40.953: INFO: Created: latency-svc-s5cj6 +Oct 27 14:13:40.984: INFO: Got endpoints: latency-svc-9m8tg [460.801563ms] +Oct 27 14:13:41.003: INFO: Created: latency-svc-r7jw7 +Oct 27 14:13:41.034: INFO: Got endpoints: latency-svc-pzc58 [501.836639ms] +Oct 27 14:13:41.057: INFO: Created: latency-svc-9fhrg +Oct 27 14:13:41.084: INFO: Got endpoints: latency-svc-mzdkr [545.197716ms] +Oct 27 14:13:41.104: INFO: Created: latency-svc-9hc9p +Oct 27 14:13:41.133: INFO: Got endpoints: latency-svc-7pk6h [586.557603ms] +Oct 27 14:13:41.156: INFO: Created: latency-svc-4m5sx +Oct 27 14:13:41.184: INFO: Got endpoints: latency-svc-gxm4c [628.78027ms] +Oct 27 14:13:41.205: INFO: Created: latency-svc-c2dj2 +Oct 27 14:13:41.236: INFO: Got endpoints: latency-svc-2p9wl [666.192791ms] +Oct 27 14:13:41.256: INFO: Created: latency-svc-vknm2 +Oct 27 14:13:41.287: INFO: Got endpoints: latency-svc-5h7vq [716.220509ms] +Oct 27 14:13:41.307: INFO: Created: latency-svc-p9t6h +Oct 27 14:13:41.338: INFO: Got endpoints: latency-svc-fsszs [754.446801ms] +Oct 27 14:13:41.420: INFO: Got endpoints: 
latency-svc-6nftx [785.542177ms] +Oct 27 14:13:41.511: INFO: Created: latency-svc-ssclv +Oct 27 14:13:41.520: INFO: Got endpoints: latency-svc-p4zhd [834.960831ms] +Oct 27 14:13:41.520: INFO: Got endpoints: latency-svc-z78r8 [786.718034ms] +Oct 27 14:13:41.523: INFO: Created: latency-svc-kd8z6 +Oct 27 14:13:41.532: INFO: Got endpoints: latency-svc-ngppr [748.335116ms] +Oct 27 14:13:41.538: INFO: Created: latency-svc-6sndf +Oct 27 14:13:41.545: INFO: Created: latency-svc-wtctv +Oct 27 14:13:41.556: INFO: Created: latency-svc-tfxrv +Oct 27 14:13:41.582: INFO: Got endpoints: latency-svc-rrtsx [749.064794ms] +Oct 27 14:13:41.601: INFO: Created: latency-svc-827v6 +Oct 27 14:13:41.633: INFO: Got endpoints: latency-svc-lk7pf [749.865445ms] +Oct 27 14:13:41.652: INFO: Created: latency-svc-pcsz5 +Oct 27 14:13:41.682: INFO: Got endpoints: latency-svc-s5cj6 [749.243945ms] +Oct 27 14:13:41.701: INFO: Created: latency-svc-884pj +Oct 27 14:13:41.733: INFO: Got endpoints: latency-svc-r7jw7 [749.387934ms] +Oct 27 14:13:41.751: INFO: Created: latency-svc-lpjph +Oct 27 14:13:41.788: INFO: Got endpoints: latency-svc-9fhrg [753.523788ms] +Oct 27 14:13:41.808: INFO: Created: latency-svc-4n5zx +Oct 27 14:13:41.833: INFO: Got endpoints: latency-svc-9hc9p [749.355107ms] +Oct 27 14:13:41.852: INFO: Created: latency-svc-vhfmg +Oct 27 14:13:41.886: INFO: Got endpoints: latency-svc-4m5sx [752.717589ms] +Oct 27 14:13:41.907: INFO: Created: latency-svc-7hdhk +Oct 27 14:13:41.933: INFO: Got endpoints: latency-svc-c2dj2 [748.617804ms] +Oct 27 14:13:41.953: INFO: Created: latency-svc-9zx4x +Oct 27 14:13:41.988: INFO: Got endpoints: latency-svc-vknm2 [751.948627ms] +Oct 27 14:13:42.008: INFO: Created: latency-svc-zdx45 +Oct 27 14:13:42.033: INFO: Got endpoints: latency-svc-p9t6h [745.164358ms] +Oct 27 14:13:42.054: INFO: Created: latency-svc-stt5k +Oct 27 14:13:42.088: INFO: Got endpoints: latency-svc-ssclv [749.976275ms] +Oct 27 14:13:42.113: INFO: Created: latency-svc-6h5n6 +Oct 27 14:13:42.133: INFO: Got endpoints: latency-svc-kd8z6 [713.227283ms] +Oct 27 14:13:42.153: INFO: Created: latency-svc-s2wl2 +Oct 27 14:13:42.187: INFO: Got endpoints: latency-svc-6sndf [667.748893ms] +Oct 27 14:13:42.218: INFO: Created: latency-svc-5qd8s +Oct 27 14:13:42.234: INFO: Got endpoints: latency-svc-wtctv [714.053009ms] +Oct 27 14:13:42.254: INFO: Created: latency-svc-nwqn8 +Oct 27 14:13:42.285: INFO: Got endpoints: latency-svc-tfxrv [752.584195ms] +Oct 27 14:13:42.304: INFO: Created: latency-svc-dpjbc +Oct 27 14:13:42.333: INFO: Got endpoints: latency-svc-827v6 [750.912848ms] +Oct 27 14:13:42.352: INFO: Created: latency-svc-b62sp +Oct 27 14:13:42.383: INFO: Got endpoints: latency-svc-pcsz5 [749.895892ms] +Oct 27 14:13:42.403: INFO: Created: latency-svc-vc58v +Oct 27 14:13:42.435: INFO: Got endpoints: latency-svc-884pj [752.242643ms] +Oct 27 14:13:42.455: INFO: Created: latency-svc-s2rdb +Oct 27 14:13:42.483: INFO: Got endpoints: latency-svc-lpjph [749.864047ms] +Oct 27 14:13:42.502: INFO: Created: latency-svc-xtd4s +Oct 27 14:13:42.538: INFO: Got endpoints: latency-svc-4n5zx [750.077354ms] +Oct 27 14:13:42.559: INFO: Created: latency-svc-4tftv +Oct 27 14:13:42.584: INFO: Got endpoints: latency-svc-vhfmg [750.229092ms] +Oct 27 14:13:42.607: INFO: Created: latency-svc-s62bp +Oct 27 14:13:42.637: INFO: Got endpoints: latency-svc-7hdhk [751.17156ms] +Oct 27 14:13:42.657: INFO: Created: latency-svc-kxqsp +Oct 27 14:13:42.684: INFO: Got endpoints: latency-svc-9zx4x [750.426324ms] +Oct 27 14:13:42.709: INFO: Created: latency-svc-lsj8z +Oct 
27 14:13:42.733: INFO: Got endpoints: latency-svc-zdx45 [744.980798ms] +Oct 27 14:13:42.757: INFO: Created: latency-svc-sptbh +Oct 27 14:13:42.788: INFO: Got endpoints: latency-svc-stt5k [755.03705ms] +Oct 27 14:13:42.807: INFO: Created: latency-svc-5pwcs +Oct 27 14:13:42.833: INFO: Got endpoints: latency-svc-6h5n6 [744.725088ms] +Oct 27 14:13:42.858: INFO: Created: latency-svc-vs6cd +Oct 27 14:13:42.882: INFO: Got endpoints: latency-svc-s2wl2 [748.565154ms] +Oct 27 14:13:42.900: INFO: Created: latency-svc-njccq +Oct 27 14:13:42.933: INFO: Got endpoints: latency-svc-5qd8s [745.847783ms] +Oct 27 14:13:42.957: INFO: Created: latency-svc-sbz4w +Oct 27 14:13:42.986: INFO: Got endpoints: latency-svc-nwqn8 [751.790344ms] +Oct 27 14:13:43.008: INFO: Created: latency-svc-72l9c +Oct 27 14:13:43.034: INFO: Got endpoints: latency-svc-dpjbc [748.789849ms] +Oct 27 14:13:43.053: INFO: Created: latency-svc-4vnwc +Oct 27 14:13:43.084: INFO: Got endpoints: latency-svc-b62sp [750.531436ms] +Oct 27 14:13:43.103: INFO: Created: latency-svc-tzvhk +Oct 27 14:13:43.133: INFO: Got endpoints: latency-svc-vc58v [750.411899ms] +Oct 27 14:13:43.153: INFO: Created: latency-svc-zqfs8 +Oct 27 14:13:43.185: INFO: Got endpoints: latency-svc-s2rdb [750.72926ms] +Oct 27 14:13:43.204: INFO: Created: latency-svc-nmdds +Oct 27 14:13:43.238: INFO: Got endpoints: latency-svc-xtd4s [755.094069ms] +Oct 27 14:13:43.263: INFO: Created: latency-svc-vkhth +Oct 27 14:13:43.288: INFO: Got endpoints: latency-svc-4tftv [750.282224ms] +Oct 27 14:13:43.307: INFO: Created: latency-svc-np445 +Oct 27 14:13:43.333: INFO: Got endpoints: latency-svc-s62bp [749.57789ms] +Oct 27 14:13:43.352: INFO: Created: latency-svc-zw7wt +Oct 27 14:13:43.385: INFO: Got endpoints: latency-svc-kxqsp [748.093303ms] +Oct 27 14:13:43.405: INFO: Created: latency-svc-66cl4 +Oct 27 14:13:43.433: INFO: Got endpoints: latency-svc-lsj8z [748.899253ms] +Oct 27 14:13:43.452: INFO: Created: latency-svc-9zdg4 +Oct 27 14:13:43.485: INFO: Got endpoints: latency-svc-sptbh [751.660241ms] +Oct 27 14:13:43.509: INFO: Created: latency-svc-xtml5 +Oct 27 14:13:43.537: INFO: Got endpoints: latency-svc-5pwcs [749.100203ms] +Oct 27 14:13:43.556: INFO: Created: latency-svc-s2t7q +Oct 27 14:13:43.586: INFO: Got endpoints: latency-svc-vs6cd [753.052859ms] +Oct 27 14:13:43.607: INFO: Created: latency-svc-vvqmh +Oct 27 14:13:43.633: INFO: Got endpoints: latency-svc-njccq [751.178296ms] +Oct 27 14:13:43.652: INFO: Created: latency-svc-zlrmz +Oct 27 14:13:43.683: INFO: Got endpoints: latency-svc-sbz4w [749.776815ms] +Oct 27 14:13:43.703: INFO: Created: latency-svc-rnlzk +Oct 27 14:13:43.735: INFO: Got endpoints: latency-svc-72l9c [749.413547ms] +Oct 27 14:13:43.754: INFO: Created: latency-svc-k7qfh +Oct 27 14:13:43.783: INFO: Got endpoints: latency-svc-4vnwc [749.089146ms] +Oct 27 14:13:43.803: INFO: Created: latency-svc-bwl99 +Oct 27 14:13:43.833: INFO: Got endpoints: latency-svc-tzvhk [748.793ms] +Oct 27 14:13:43.851: INFO: Created: latency-svc-9zbcb +Oct 27 14:13:43.887: INFO: Got endpoints: latency-svc-zqfs8 [753.237249ms] +Oct 27 14:13:43.907: INFO: Created: latency-svc-mfnlc +Oct 27 14:13:43.933: INFO: Got endpoints: latency-svc-nmdds [747.967981ms] +Oct 27 14:13:43.953: INFO: Created: latency-svc-2x67p +Oct 27 14:13:43.986: INFO: Got endpoints: latency-svc-vkhth [747.86614ms] +Oct 27 14:13:44.005: INFO: Created: latency-svc-fb6sz +Oct 27 14:13:44.038: INFO: Got endpoints: latency-svc-np445 [749.707638ms] +Oct 27 14:13:44.057: INFO: Created: latency-svc-9mv9x +Oct 27 14:13:44.083: INFO: 
Got endpoints: latency-svc-zw7wt [749.219231ms] +Oct 27 14:13:44.102: INFO: Created: latency-svc-r2wt5 +Oct 27 14:13:44.132: INFO: Got endpoints: latency-svc-66cl4 [746.943914ms] +Oct 27 14:13:44.151: INFO: Created: latency-svc-kvwp4 +Oct 27 14:13:44.185: INFO: Got endpoints: latency-svc-9zdg4 [752.615759ms] +Oct 27 14:13:44.205: INFO: Created: latency-svc-q4ff9 +Oct 27 14:13:44.234: INFO: Got endpoints: latency-svc-xtml5 [749.291825ms] +Oct 27 14:13:44.259: INFO: Created: latency-svc-gl785 +Oct 27 14:13:44.283: INFO: Got endpoints: latency-svc-s2t7q [746.204162ms] +Oct 27 14:13:44.303: INFO: Created: latency-svc-78c8d +Oct 27 14:13:44.342: INFO: Got endpoints: latency-svc-vvqmh [755.871013ms] +Oct 27 14:13:44.366: INFO: Created: latency-svc-9lswm +Oct 27 14:13:44.383: INFO: Got endpoints: latency-svc-zlrmz [749.854557ms] +Oct 27 14:13:44.403: INFO: Created: latency-svc-2j9p4 +Oct 27 14:13:44.438: INFO: Got endpoints: latency-svc-rnlzk [754.203821ms] +Oct 27 14:13:44.467: INFO: Created: latency-svc-h4fpl +Oct 27 14:13:44.484: INFO: Got endpoints: latency-svc-k7qfh [749.184628ms] +Oct 27 14:13:44.504: INFO: Created: latency-svc-btfz4 +Oct 27 14:13:44.532: INFO: Got endpoints: latency-svc-bwl99 [748.954794ms] +Oct 27 14:13:44.552: INFO: Created: latency-svc-h6tt2 +Oct 27 14:13:44.584: INFO: Got endpoints: latency-svc-9zbcb [750.778788ms] +Oct 27 14:13:44.603: INFO: Created: latency-svc-tvq6t +Oct 27 14:13:44.633: INFO: Got endpoints: latency-svc-mfnlc [746.266063ms] +Oct 27 14:13:44.653: INFO: Created: latency-svc-gwql2 +Oct 27 14:13:44.683: INFO: Got endpoints: latency-svc-2x67p [749.336388ms] +Oct 27 14:13:44.705: INFO: Created: latency-svc-8nmwb +Oct 27 14:13:44.734: INFO: Got endpoints: latency-svc-fb6sz [747.417808ms] +Oct 27 14:13:44.757: INFO: Created: latency-svc-zgszz +Oct 27 14:13:44.789: INFO: Got endpoints: latency-svc-9mv9x [750.721886ms] +Oct 27 14:13:44.808: INFO: Created: latency-svc-v8q6d +Oct 27 14:13:44.834: INFO: Got endpoints: latency-svc-r2wt5 [750.967343ms] +Oct 27 14:13:44.853: INFO: Created: latency-svc-77mfb +Oct 27 14:13:44.883: INFO: Got endpoints: latency-svc-kvwp4 [751.095624ms] +Oct 27 14:13:44.903: INFO: Created: latency-svc-xxc95 +Oct 27 14:13:44.933: INFO: Got endpoints: latency-svc-q4ff9 [747.6647ms] +Oct 27 14:13:44.953: INFO: Created: latency-svc-z92dw +Oct 27 14:13:44.983: INFO: Got endpoints: latency-svc-gl785 [748.474618ms] +Oct 27 14:13:45.004: INFO: Created: latency-svc-kbjmz +Oct 27 14:13:45.033: INFO: Got endpoints: latency-svc-78c8d [749.267464ms] +Oct 27 14:13:45.061: INFO: Created: latency-svc-vnkqt +Oct 27 14:13:45.087: INFO: Got endpoints: latency-svc-9lswm [744.90529ms] +Oct 27 14:13:45.111: INFO: Created: latency-svc-4zqnj +Oct 27 14:13:45.137: INFO: Got endpoints: latency-svc-2j9p4 [754.13377ms] +Oct 27 14:13:45.157: INFO: Created: latency-svc-pgw55 +Oct 27 14:13:45.184: INFO: Got endpoints: latency-svc-h4fpl [746.01036ms] +Oct 27 14:13:45.203: INFO: Created: latency-svc-xxl4c +Oct 27 14:13:45.233: INFO: Got endpoints: latency-svc-btfz4 [748.926626ms] +Oct 27 14:13:45.253: INFO: Created: latency-svc-6p2b9 +Oct 27 14:13:45.284: INFO: Got endpoints: latency-svc-h6tt2 [751.29582ms] +Oct 27 14:13:45.305: INFO: Created: latency-svc-qnm8d +Oct 27 14:13:45.332: INFO: Got endpoints: latency-svc-tvq6t [748.704453ms] +Oct 27 14:13:45.356: INFO: Created: latency-svc-764xn +Oct 27 14:13:45.384: INFO: Got endpoints: latency-svc-gwql2 [750.26948ms] +Oct 27 14:13:45.412: INFO: Created: latency-svc-xjzb4 +Oct 27 14:13:45.436: INFO: Got endpoints: 
latency-svc-8nmwb [753.13252ms] +Oct 27 14:13:45.461: INFO: Created: latency-svc-jdc7x +Oct 27 14:13:45.483: INFO: Got endpoints: latency-svc-zgszz [749.461343ms] +Oct 27 14:13:45.502: INFO: Created: latency-svc-7nrtj +Oct 27 14:13:45.542: INFO: Got endpoints: latency-svc-v8q6d [752.951601ms] +Oct 27 14:13:45.561: INFO: Created: latency-svc-fjpxf +Oct 27 14:13:45.584: INFO: Got endpoints: latency-svc-77mfb [750.353004ms] +Oct 27 14:13:45.603: INFO: Created: latency-svc-zngfm +Oct 27 14:13:45.634: INFO: Got endpoints: latency-svc-xxc95 [750.109586ms] +Oct 27 14:13:45.659: INFO: Created: latency-svc-lp4vk +Oct 27 14:13:45.684: INFO: Got endpoints: latency-svc-z92dw [750.444741ms] +Oct 27 14:13:45.710: INFO: Created: latency-svc-p68fw +Oct 27 14:13:45.733: INFO: Got endpoints: latency-svc-kbjmz [749.927242ms] +Oct 27 14:13:45.768: INFO: Created: latency-svc-bzs7t +Oct 27 14:13:45.783: INFO: Got endpoints: latency-svc-vnkqt [750.410541ms] +Oct 27 14:13:45.803: INFO: Created: latency-svc-zf6t5 +Oct 27 14:13:45.837: INFO: Got endpoints: latency-svc-4zqnj [749.775307ms] +Oct 27 14:13:45.865: INFO: Created: latency-svc-jkcxm +Oct 27 14:13:45.885: INFO: Got endpoints: latency-svc-pgw55 [748.040998ms] +Oct 27 14:13:45.905: INFO: Created: latency-svc-mvpnj +Oct 27 14:13:45.933: INFO: Got endpoints: latency-svc-xxl4c [749.016512ms] +Oct 27 14:13:45.959: INFO: Created: latency-svc-5gjlq +Oct 27 14:13:45.983: INFO: Got endpoints: latency-svc-6p2b9 [749.662574ms] +Oct 27 14:13:46.007: INFO: Created: latency-svc-rbtzd +Oct 27 14:13:46.040: INFO: Got endpoints: latency-svc-qnm8d [756.068311ms] +Oct 27 14:13:46.063: INFO: Created: latency-svc-h8jjk +Oct 27 14:13:46.083: INFO: Got endpoints: latency-svc-764xn [750.473618ms] +Oct 27 14:13:46.103: INFO: Created: latency-svc-9wllq +Oct 27 14:13:46.134: INFO: Got endpoints: latency-svc-xjzb4 [750.459585ms] +Oct 27 14:13:46.161: INFO: Created: latency-svc-jngmz +Oct 27 14:13:46.186: INFO: Got endpoints: latency-svc-jdc7x [749.553115ms] +Oct 27 14:13:46.206: INFO: Created: latency-svc-jlwnv +Oct 27 14:13:46.234: INFO: Got endpoints: latency-svc-7nrtj [751.31849ms] +Oct 27 14:13:46.254: INFO: Created: latency-svc-kgw8f +Oct 27 14:13:46.285: INFO: Got endpoints: latency-svc-fjpxf [743.458542ms] +Oct 27 14:13:46.305: INFO: Created: latency-svc-lqvpx +Oct 27 14:13:46.333: INFO: Got endpoints: latency-svc-zngfm [748.524331ms] +Oct 27 14:13:46.351: INFO: Created: latency-svc-pwpmd +Oct 27 14:13:46.384: INFO: Got endpoints: latency-svc-lp4vk [750.713991ms] +Oct 27 14:13:46.412: INFO: Created: latency-svc-6q9dr +Oct 27 14:13:46.433: INFO: Got endpoints: latency-svc-p68fw [748.888137ms] +Oct 27 14:13:46.454: INFO: Created: latency-svc-rtv7t +Oct 27 14:13:46.484: INFO: Got endpoints: latency-svc-bzs7t [750.596505ms] +Oct 27 14:13:46.514: INFO: Created: latency-svc-mxrq5 +Oct 27 14:13:46.535: INFO: Got endpoints: latency-svc-zf6t5 [751.250008ms] +Oct 27 14:13:46.554: INFO: Created: latency-svc-d7dq8 +Oct 27 14:13:46.585: INFO: Got endpoints: latency-svc-jkcxm [747.645567ms] +Oct 27 14:13:46.604: INFO: Created: latency-svc-h2rk7 +Oct 27 14:13:46.633: INFO: Got endpoints: latency-svc-mvpnj [747.611274ms] +Oct 27 14:13:46.652: INFO: Created: latency-svc-n2xp6 +Oct 27 14:13:46.683: INFO: Got endpoints: latency-svc-5gjlq [750.115482ms] +Oct 27 14:13:46.703: INFO: Created: latency-svc-glgqp +Oct 27 14:13:46.733: INFO: Got endpoints: latency-svc-rbtzd [749.927975ms] +Oct 27 14:13:46.752: INFO: Created: latency-svc-tjwkq +Oct 27 14:13:46.785: INFO: Got endpoints: latency-svc-h8jjk 
[745.311114ms] +Oct 27 14:13:46.806: INFO: Created: latency-svc-cjfx7 +Oct 27 14:13:46.838: INFO: Got endpoints: latency-svc-9wllq [755.074399ms] +Oct 27 14:13:46.857: INFO: Created: latency-svc-gvvkp +Oct 27 14:13:46.883: INFO: Got endpoints: latency-svc-jngmz [748.666606ms] +Oct 27 14:13:46.904: INFO: Created: latency-svc-bbjzp +Oct 27 14:13:46.937: INFO: Got endpoints: latency-svc-jlwnv [751.589438ms] +Oct 27 14:13:46.959: INFO: Created: latency-svc-c7454 +Oct 27 14:13:46.983: INFO: Got endpoints: latency-svc-kgw8f [748.383802ms] +Oct 27 14:13:47.003: INFO: Created: latency-svc-jp92r +Oct 27 14:13:47.034: INFO: Got endpoints: latency-svc-lqvpx [748.436714ms] +Oct 27 14:13:47.066: INFO: Created: latency-svc-445rb +Oct 27 14:13:47.084: INFO: Got endpoints: latency-svc-pwpmd [751.096323ms] +Oct 27 14:13:47.104: INFO: Created: latency-svc-jp88d +Oct 27 14:13:47.134: INFO: Got endpoints: latency-svc-6q9dr [749.36057ms] +Oct 27 14:13:47.160: INFO: Created: latency-svc-9tfkb +Oct 27 14:13:47.210: INFO: Got endpoints: latency-svc-rtv7t [777.842248ms] +Oct 27 14:13:47.236: INFO: Created: latency-svc-vkncp +Oct 27 14:13:47.237: INFO: Got endpoints: latency-svc-mxrq5 [752.833162ms] +Oct 27 14:13:47.316: INFO: Created: latency-svc-rnkw6 +Oct 27 14:13:47.412: INFO: Got endpoints: latency-svc-d7dq8 [877.121816ms] +Oct 27 14:13:47.413: INFO: Got endpoints: latency-svc-h2rk7 [828.046486ms] +Oct 27 14:13:47.415: INFO: Got endpoints: latency-svc-n2xp6 [782.240221ms] +Oct 27 14:13:47.432: INFO: Created: latency-svc-jnwl6 +Oct 27 14:13:47.434: INFO: Got endpoints: latency-svc-glgqp [750.544524ms] +Oct 27 14:13:47.443: INFO: Created: latency-svc-j6s7d +Oct 27 14:13:47.452: INFO: Created: latency-svc-sdrvr +Oct 27 14:13:47.459: INFO: Created: latency-svc-z66qb +Oct 27 14:13:47.486: INFO: Got endpoints: latency-svc-tjwkq [752.55244ms] +Oct 27 14:13:47.506: INFO: Created: latency-svc-qxkh7 +Oct 27 14:13:47.534: INFO: Got endpoints: latency-svc-cjfx7 [748.28739ms] +Oct 27 14:13:47.553: INFO: Created: latency-svc-h88ng +Oct 27 14:13:47.588: INFO: Got endpoints: latency-svc-gvvkp [749.747407ms] +Oct 27 14:13:47.608: INFO: Created: latency-svc-v4f98 +Oct 27 14:13:47.636: INFO: Got endpoints: latency-svc-bbjzp [752.911967ms] +Oct 27 14:13:47.665: INFO: Created: latency-svc-b7bzc +Oct 27 14:13:47.683: INFO: Got endpoints: latency-svc-c7454 [745.753017ms] +Oct 27 14:13:47.703: INFO: Created: latency-svc-6lprc +Oct 27 14:13:47.733: INFO: Got endpoints: latency-svc-jp92r [749.915857ms] +Oct 27 14:13:47.752: INFO: Created: latency-svc-nj7sz +Oct 27 14:13:47.784: INFO: Got endpoints: latency-svc-445rb [749.979481ms] +Oct 27 14:13:47.817: INFO: Created: latency-svc-9ptf5 +Oct 27 14:13:47.835: INFO: Got endpoints: latency-svc-jp88d [751.127912ms] +Oct 27 14:13:47.853: INFO: Created: latency-svc-pr8cn +Oct 27 14:13:47.884: INFO: Got endpoints: latency-svc-9tfkb [749.727127ms] +Oct 27 14:13:47.914: INFO: Created: latency-svc-gtm2g +Oct 27 14:13:47.933: INFO: Got endpoints: latency-svc-vkncp [722.076753ms] +Oct 27 14:13:47.953: INFO: Created: latency-svc-wlsvq +Oct 27 14:13:47.983: INFO: Got endpoints: latency-svc-rnkw6 [745.971157ms] +Oct 27 14:13:48.002: INFO: Created: latency-svc-j5jw8 +Oct 27 14:13:48.033: INFO: Got endpoints: latency-svc-jnwl6 [621.236887ms] +Oct 27 14:13:48.083: INFO: Got endpoints: latency-svc-j6s7d [670.042003ms] +Oct 27 14:13:48.133: INFO: Got endpoints: latency-svc-sdrvr [717.696736ms] +Oct 27 14:13:48.183: INFO: Got endpoints: latency-svc-z66qb [749.391015ms] +Oct 27 14:13:48.233: INFO: Got 
endpoints: latency-svc-qxkh7 [747.575369ms] +Oct 27 14:13:48.283: INFO: Got endpoints: latency-svc-h88ng [749.566992ms] +Oct 27 14:13:48.338: INFO: Got endpoints: latency-svc-v4f98 [749.734847ms] +Oct 27 14:13:48.385: INFO: Got endpoints: latency-svc-b7bzc [749.37712ms] +Oct 27 14:13:48.433: INFO: Got endpoints: latency-svc-6lprc [749.65407ms] +Oct 27 14:13:48.484: INFO: Got endpoints: latency-svc-nj7sz [750.657062ms] +Oct 27 14:13:48.538: INFO: Got endpoints: latency-svc-9ptf5 [753.901607ms] +Oct 27 14:13:48.583: INFO: Got endpoints: latency-svc-pr8cn [748.087935ms] +Oct 27 14:13:48.633: INFO: Got endpoints: latency-svc-gtm2g [749.580096ms] +Oct 27 14:13:48.683: INFO: Got endpoints: latency-svc-wlsvq [750.328531ms] +Oct 27 14:13:48.735: INFO: Got endpoints: latency-svc-j5jw8 [752.344631ms] +Oct 27 14:13:48.735: INFO: Latencies: [23.121905ms 34.814012ms 43.276823ms 57.267534ms 61.338008ms 69.702515ms 80.225275ms 91.484828ms 101.710613ms 108.363557ms 124.67609ms 134.239611ms 140.927007ms 140.975373ms 145.859097ms 145.99883ms 146.013595ms 148.160386ms 149.643818ms 150.114798ms 157.532283ms 157.926536ms 161.853001ms 162.080744ms 162.154243ms 162.822226ms 164.696073ms 165.37815ms 165.571563ms 165.68929ms 169.545432ms 171.586813ms 172.14033ms 174.643785ms 176.29886ms 176.307184ms 177.11814ms 191.894318ms 238.849066ms 276.699488ms 307.662494ms 350.855583ms 388.392745ms 420.968058ms 460.801563ms 501.836639ms 545.197716ms 586.557603ms 621.236887ms 628.78027ms 666.192791ms 667.748893ms 670.042003ms 713.227283ms 714.053009ms 716.220509ms 717.696736ms 722.076753ms 743.458542ms 744.725088ms 744.90529ms 744.980798ms 745.164358ms 745.311114ms 745.753017ms 745.847783ms 745.971157ms 746.01036ms 746.204162ms 746.266063ms 746.943914ms 747.417808ms 747.575369ms 747.611274ms 747.645567ms 747.6647ms 747.86614ms 747.967981ms 748.040998ms 748.087935ms 748.093303ms 748.28739ms 748.335116ms 748.383802ms 748.436714ms 748.474618ms 748.524331ms 748.565154ms 748.617804ms 748.666606ms 748.704453ms 748.789849ms 748.793ms 748.888137ms 748.899253ms 748.926626ms 748.954794ms 749.016512ms 749.064794ms 749.089146ms 749.100203ms 749.184628ms 749.219231ms 749.243945ms 749.267464ms 749.291825ms 749.336388ms 749.355107ms 749.36057ms 749.37712ms 749.387934ms 749.391015ms 749.413547ms 749.461343ms 749.553115ms 749.566992ms 749.57789ms 749.580096ms 749.65407ms 749.662574ms 749.707638ms 749.727127ms 749.734847ms 749.747407ms 749.775307ms 749.776815ms 749.854557ms 749.864047ms 749.865445ms 749.895892ms 749.915857ms 749.927242ms 749.927975ms 749.976275ms 749.979481ms 750.077354ms 750.109586ms 750.115482ms 750.229092ms 750.26948ms 750.282224ms 750.328531ms 750.353004ms 750.410541ms 750.411899ms 750.426324ms 750.444741ms 750.459585ms 750.473618ms 750.531436ms 750.544524ms 750.596505ms 750.657062ms 750.713991ms 750.721886ms 750.72926ms 750.778788ms 750.912848ms 750.967343ms 751.095624ms 751.096323ms 751.127912ms 751.17156ms 751.178296ms 751.250008ms 751.29582ms 751.31849ms 751.589438ms 751.660241ms 751.790344ms 751.948627ms 752.242643ms 752.344631ms 752.55244ms 752.584195ms 752.615759ms 752.717589ms 752.833162ms 752.911967ms 752.951601ms 753.052859ms 753.13252ms 753.237249ms 753.523788ms 753.901607ms 754.13377ms 754.203821ms 754.446801ms 755.03705ms 755.074399ms 755.094069ms 755.871013ms 756.068311ms 777.842248ms 782.240221ms 785.542177ms 786.718034ms 828.046486ms 834.960831ms 877.121816ms] +Oct 27 14:13:48.735: INFO: 50 %ile: 749.100203ms +Oct 27 14:13:48.735: INFO: 90 %ile: 753.052859ms +Oct 27 14:13:48.735: INFO: 99 %ile: 834.960831ms 
+Oct 27 14:13:48.735: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:48.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-191" for this suite. +•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":24,"skipped":403,"failed":0} +SSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:48.772: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-946 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:13:48.958: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:13:48.988: INFO: The status of Pod pod-logs-websocket-4b9d922b-fbec-4695-aa16-a18ffe6232d3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:13:51.002: INFO: The status of Pod pod-logs-websocket-4b9d922b-fbec-4695-aa16-a18ffe6232d3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:13:53.002: INFO: The status of Pod pod-logs-websocket-4b9d922b-fbec-4695-aa16-a18ffe6232d3 is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:53.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-946" for this suite. +•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":25,"skipped":406,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:53.122: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9012 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:53.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9012" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":26,"skipped":423,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:53.407: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9117 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-466aac67-858c-4f97-8c63-627e5d6e5faf +STEP: Creating a pod to test consume secrets +Oct 27 14:13:53.628: INFO: Waiting up to 5m0s for pod "pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5" in namespace "secrets-9117" to be "Succeeded or Failed" +Oct 27 14:13:53.640: INFO: Pod "pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.280409ms +Oct 27 14:13:55.714: INFO: Pod "pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085679409s +Oct 27 14:13:57.726: INFO: Pod "pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097934638s +STEP: Saw pod success +Oct 27 14:13:57.726: INFO: Pod "pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5" satisfied condition "Succeeded or Failed" +Oct 27 14:13:57.737: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5 container secret-volume-test: +STEP: delete the pod +Oct 27 14:13:57.836: INFO: Waiting for pod pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5 to disappear +Oct 27 14:13:57.846: INFO: Pod pod-secrets-eaac75c7-922d-40f1-9a61-26099457f5e5 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:13:57.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9117" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":27,"skipped":432,"failed":0} +SSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:13:57.916: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-4231 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Oct 27 14:14:02.668: INFO: Successfully updated pod "adopt-release--1-b7z92" +STEP: Checking that the Job readopts the Pod +Oct 27 14:14:02.669: INFO: Waiting up to 15m0s for pod "adopt-release--1-b7z92" in namespace "job-4231" to be "adopted" +Oct 27 14:14:02.680: INFO: Pod "adopt-release--1-b7z92": Phase="Running", Reason="", readiness=true. Elapsed: 11.030339ms +Oct 27 14:14:04.693: INFO: Pod "adopt-release--1-b7z92": Phase="Running", Reason="", readiness=true. Elapsed: 2.024424346s +Oct 27 14:14:04.693: INFO: Pod "adopt-release--1-b7z92" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Oct 27 14:14:05.220: INFO: Successfully updated pod "adopt-release--1-b7z92" +STEP: Checking that the Job releases the Pod +Oct 27 14:14:05.220: INFO: Waiting up to 15m0s for pod "adopt-release--1-b7z92" in namespace "job-4231" to be "released" +Oct 27 14:14:05.231: INFO: Pod "adopt-release--1-b7z92": Phase="Running", Reason="", readiness=true. Elapsed: 10.716712ms +Oct 27 14:14:05.231: INFO: Pod "adopt-release--1-b7z92" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:14:05.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-4231" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":28,"skipped":438,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:14:05.263: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-5245 +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:14:05.457: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:14:06.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-5245" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":29,"skipped":456,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:14:06.045: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3255 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:14:06.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:14:08.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940846, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:14:11.963: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Oct 27 14:14:16.186: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=webhook-3255 attach --namespace=webhook-3255 to-be-attached-pod -i -c=container1' +Oct 27 14:14:16.477: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:14:16.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3255" for this suite. +STEP: Destroying namespace "webhook-3255-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":30,"skipped":471,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:14:16.603: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3191 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:14:16.823: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59e27a9a-1722-49c9-95a3-869999cc5db9" in namespace "downward-api-3191" to be "Succeeded or Failed" +Oct 27 14:14:16.835: INFO: Pod "downwardapi-volume-59e27a9a-1722-49c9-95a3-869999cc5db9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.553319ms +Oct 27 14:14:18.847: INFO: Pod "downwardapi-volume-59e27a9a-1722-49c9-95a3-869999cc5db9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024432902s +STEP: Saw pod success +Oct 27 14:14:18.847: INFO: Pod "downwardapi-volume-59e27a9a-1722-49c9-95a3-869999cc5db9" satisfied condition "Succeeded or Failed" +Oct 27 14:14:18.859: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-59e27a9a-1722-49c9-95a3-869999cc5db9 container client-container: +STEP: delete the pod +Oct 27 14:14:18.936: INFO: Waiting for pod downwardapi-volume-59e27a9a-1722-49c9-95a3-869999cc5db9 to disappear +Oct 27 14:14:18.947: INFO: Pod downwardapi-volume-59e27a9a-1722-49c9-95a3-869999cc5db9 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:14:18.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3191" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":31,"skipped":486,"failed":0} +SSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:14:18.980: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3635 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3635.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3635.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3635.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3635.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 241.128.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.128.241_udp@PTR;check="$$(dig +tcp +noall +answer +search 241.128.71.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.71.128.241_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3635.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3635.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3635.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3635.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 241.128.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.128.241_udp@PTR;check="$$(dig +tcp +noall +answer +search 241.128.71.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.71.128.241_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:14:23.374: INFO: Unable to read wheezy_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:23.501: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:23.550: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:23.599: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:23.820: INFO: Unable to read jessie_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:23.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:23.883: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:23.914: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:24.177: INFO: Lookups using dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94 failed for: [wheezy_udp@dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_udp@dns-test-service.dns-3635.svc.cluster.local jessie_tcp@dns-test-service.dns-3635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local] + +Oct 27 14:14:29.218: INFO: Unable to read wheezy_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.248: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods 
dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.310: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.342: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.570: INFO: Unable to read jessie_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.636: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.667: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:29.860: INFO: Lookups using dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94 failed for: [wheezy_udp@dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_udp@dns-test-service.dns-3635.svc.cluster.local jessie_tcp@dns-test-service.dns-3635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local] + +Oct 27 14:14:34.210: INFO: Unable to read wheezy_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.243: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.279: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.343: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.584: INFO: Unable to read jessie_udp@dns-test-service.dns-3635.svc.cluster.local from pod 
dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.623: INFO: Unable to read jessie_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.657: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.689: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:34.901: INFO: Lookups using dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94 failed for: [wheezy_udp@dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_udp@dns-test-service.dns-3635.svc.cluster.local jessie_tcp@dns-test-service.dns-3635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local] + +Oct 27 14:14:39.213: INFO: Unable to read wheezy_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.245: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.319: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.583: INFO: Unable to read jessie_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.614: INFO: Unable to read jessie_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.677: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:39.870: INFO: Lookups using dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94 failed for: [wheezy_udp@dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_udp@dns-test-service.dns-3635.svc.cluster.local jessie_tcp@dns-test-service.dns-3635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local] + +Oct 27 14:14:44.213: INFO: Unable to read wheezy_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.310: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.345: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.585: INFO: Unable to read jessie_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.617: INFO: Unable to read jessie_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.679: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:44.873: INFO: Lookups using dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94 failed for: [wheezy_udp@dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_udp@dns-test-service.dns-3635.svc.cluster.local jessie_tcp@dns-test-service.dns-3635.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local] + +Oct 27 14:14:49.261: INFO: Unable to read wheezy_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:49.297: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:49.360: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:49.394: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:49.661: INFO: Unable to read jessie_udp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:49.692: INFO: Unable to read jessie_tcp@dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:49.758: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:49.818: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local from pod dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94: the server could not find the requested resource (get pods dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94) +Oct 27 14:14:50.021: INFO: Lookups using dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94 failed for: [wheezy_udp@dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@dns-test-service.dns-3635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_udp@dns-test-service.dns-3635.svc.cluster.local jessie_tcp@dns-test-service.dns-3635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3635.svc.cluster.local] + +Oct 27 14:14:54.854: INFO: DNS probes using dns-3635/dns-test-373610f9-b6e2-45c5-8a90-c980ebbfbe94 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:14:54.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3635" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":32,"skipped":490,"failed":0} +SS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:14:54.940: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-3828 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's command +Oct 27 14:14:55.150: INFO: Waiting up to 5m0s for pod "var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a" in namespace "var-expansion-3828" to be "Succeeded or Failed" +Oct 27 14:14:55.161: INFO: Pod "var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.048583ms +Oct 27 14:14:57.174: INFO: Pod "var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a": Phase="Running", Reason="", readiness=true. Elapsed: 2.023326126s +Oct 27 14:14:59.186: INFO: Pod "var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035852885s +STEP: Saw pod success +Oct 27 14:14:59.186: INFO: Pod "var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a" satisfied condition "Succeeded or Failed" +Oct 27 14:14:59.198: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a container dapi-container: +STEP: delete the pod +Oct 27 14:14:59.313: INFO: Waiting for pod var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a to disappear +Oct 27 14:14:59.324: INFO: Pod var-expansion-62244b99-6e6f-4824-ad66-0d70cd4daf2a no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:14:59.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3828" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":33,"skipped":492,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:14:59.356: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-5164 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:15.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5164" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":346,"completed":34,"skipped":499,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:15.780: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-9144 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:15:16.573: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +Oct 27 14:15:18.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940916, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940916, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940916, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940916, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:15:21.652: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:15:21.664: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:25.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-9144" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":35,"skipped":527,"failed":0} +S +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:25.729: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3212 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:15:26.004: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Oct 27 14:15:26.027: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:26.027: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Oct 27 14:15:26.212: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:26.212: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:15:27.229: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:27.229: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:15:28.229: INFO: Number of nodes with available pods: 1 +Oct 27 14:15:28.229: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Oct 27 14:15:28.287: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:28.287: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Oct 27 14:15:28.312: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:28.312: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:15:29.324: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:29.324: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:15:30.328: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:30.328: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:15:31.325: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:31.325: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:15:32.325: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:32.325: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:15:33.324: INFO: Number of nodes with available pods: 1 +Oct 27 14:15:33.324: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3212, will wait for the garbage collector to delete the pods +Oct 27 14:15:33.427: INFO: Deleting DaemonSet.extensions daemon-set took: 13.2313ms +Oct 27 14:15:33.528: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.832273ms +Oct 27 14:15:36.240: INFO: Number of nodes with available pods: 0 +Oct 27 14:15:36.240: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:15:36.251: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"11336"},"items":null} + +Oct 27 14:15:36.263: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"11336"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:36.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3212" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":36,"skipped":528,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:36.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-6936 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-kxt4 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:15:36.588: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kxt4" in namespace "subpath-6936" to be "Succeeded or Failed" +Oct 27 14:15:36.599: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.011369ms +Oct 27 14:15:38.611: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023389583s +Oct 27 14:15:40.624: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 4.036035338s +Oct 27 14:15:42.637: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 6.048993473s +Oct 27 14:15:44.649: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 8.060912246s +Oct 27 14:15:46.666: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 10.078260189s +Oct 27 14:15:48.679: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 12.091019621s +Oct 27 14:15:50.692: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 14.103785764s +Oct 27 14:15:52.704: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 16.116269956s +Oct 27 14:15:54.716: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 18.128441971s +Oct 27 14:15:56.729: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 20.141211639s +Oct 27 14:15:58.741: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Running", Reason="", readiness=true. Elapsed: 22.153570255s +Oct 27 14:16:00.755: INFO: Pod "pod-subpath-test-configmap-kxt4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.166665583s +STEP: Saw pod success +Oct 27 14:16:00.755: INFO: Pod "pod-subpath-test-configmap-kxt4" satisfied condition "Succeeded or Failed" +Oct 27 14:16:00.767: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-subpath-test-configmap-kxt4 container test-container-subpath-configmap-kxt4: +STEP: delete the pod +Oct 27 14:16:00.834: INFO: Waiting for pod pod-subpath-test-configmap-kxt4 to disappear +Oct 27 14:16:00.848: INFO: Pod pod-subpath-test-configmap-kxt4 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-kxt4 +Oct 27 14:16:00.848: INFO: Deleting pod "pod-subpath-test-configmap-kxt4" in namespace "subpath-6936" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:00.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6936" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":37,"skipped":557,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:00.894: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2734 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:16:01.752: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 14:16:03.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940961, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940961, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940961, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940961, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service 
has paired with the endpoint +Oct 27 14:16:06.835: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:07.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2734" for this suite. +STEP: Destroying namespace "webhook-2734-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":38,"skipped":568,"failed":0} + +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:07.468: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6147 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:16:07.693: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9" in namespace "downward-api-6147" to be "Succeeded or Failed" +Oct 27 14:16:07.710: INFO: Pod "downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.528184ms +Oct 27 14:16:09.723: INFO: Pod "downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030059262s +Oct 27 14:16:11.736: INFO: Pod "downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042920965s +STEP: Saw pod success +Oct 27 14:16:11.736: INFO: Pod "downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9" satisfied condition "Succeeded or Failed" +Oct 27 14:16:11.747: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9 container client-container: +STEP: delete the pod +Oct 27 14:16:11.820: INFO: Waiting for pod downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9 to disappear +Oct 27 14:16:11.831: INFO: Pod downwardapi-volume-0381e987-6702-4e38-9d3a-8a3357bf03d9 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6147" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":39,"skipped":568,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:11.866: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-8136 +STEP: Waiting for a default service account to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:16:12.079: INFO: created pod +Oct 27 14:16:12.080: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8136" to be "Succeeded or Failed" +Oct 27 14:16:12.091: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 11.348266ms +Oct 27 14:16:14.103: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02362338s +Oct 27 14:16:16.119: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039457363s +STEP: Saw pod success +Oct 27 14:16:16.119: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Oct 27 14:16:46.120: INFO: polling logs +Oct 27 14:16:46.198: INFO: Pod logs: +2021/10/27 14:16:13 OK: Got token +2021/10/27 14:16:13 validating with in-cluster discovery +2021/10/27 14:16:13 OK: got issuer https://api.tmgxs-skc.it.internal.staging.k8s.ondemand.com +2021/10/27 14:16:13 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmgxs-skc.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-8136:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635344772, NotBefore:1635344172, IssuedAt:1635344172, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8136", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"32e6d610-d94b-42ea-b169-3475e92cd434"}}} +2021/10/27 14:16:13 OK: Constructed OIDC provider for issuer https://api.tmgxs-skc.it.internal.staging.k8s.ondemand.com +2021/10/27 14:16:13 OK: Validated signature on JWT +2021/10/27 14:16:13 OK: Got valid claims from token! +2021/10/27 14:16:13 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmgxs-skc.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-8136:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635344772, NotBefore:1635344172, IssuedAt:1635344172, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8136", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"32e6d610-d94b-42ea-b169-3475e92cd434"}}} + +Oct 27 14:16:46.198: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:46.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8136" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":40,"skipped":581,"failed":0} + +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:46.244: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-6595 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:16:46.457: INFO: The status of Pod pod-update-7994bd19-b608-4d37-a3d4-90a0488ae2cb is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:16:48.470: INFO: The status of Pod pod-update-7994bd19-b608-4d37-a3d4-90a0488ae2cb is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:16:50.469: INFO: The status of Pod pod-update-7994bd19-b608-4d37-a3d4-90a0488ae2cb is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:16:52.471: INFO: The status of Pod pod-update-7994bd19-b608-4d37-a3d4-90a0488ae2cb is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 14:16:53.021: INFO: Successfully updated pod "pod-update-7994bd19-b608-4d37-a3d4-90a0488ae2cb" +STEP: verifying the updated pod is in kubernetes +Oct 27 14:16:53.043: INFO: Pod update OK +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:53.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-6595" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":41,"skipped":581,"failed":0} +SS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:53.081: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-1227 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-1227 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-1227 +Oct 27 14:16:53.309: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 14:17:03.322: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Oct 27 14:17:03.376: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Oct 27 14:17:03.400: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Oct 27 14:17:03.411: INFO: Observed &StatefulSet event: ADDED +Oct 27 14:17:03.411: INFO: Found Statefulset ss in namespace statefulset-1227 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:17:03.411: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Oct 27 14:17:03.411: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:17:03.427: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Oct 27 14:17:03.438: INFO: Observed &StatefulSet event: ADDED +Oct 27 14:17:03.438: INFO: Observed Statefulset ss in namespace statefulset-1227 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:17:03.438: INFO: Observed &StatefulSet event: MODIFIED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:17:03.438: INFO: Deleting all statefulset in ns statefulset-1227 +Oct 27 14:17:03.449: INFO: Scaling statefulset ss to 0 +Oct 27 14:17:13.512: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:17:13.523: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:13.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-1227" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":42,"skipped":583,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:13.600: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8289 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:17:13.782: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:18.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-8289" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":43,"skipped":622,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:18.557: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-1000 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:22.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1000" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":44,"skipped":634,"failed":0} + +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:22.823: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8386 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:34.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8386" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":346,"completed":45,"skipped":634,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:34.183: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9547 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service nodeport-test with type=NodePort in namespace services-9547 +STEP: creating replication controller nodeport-test in namespace services-9547 +I1027 14:17:34.405491 5768 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9547, replica count: 2 +Oct 27 14:17:37.456: INFO: Creating new exec pod +I1027 14:17:37.456921 5768 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:17:42.518: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9547 exec execpodmpb2q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:17:43.059: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:17:43.059: INFO: stdout: "" +Oct 27 14:17:44.060: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9547 exec execpodmpb2q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:17:44.671: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port 
[tcp/http] succeeded!\n" +Oct 27 14:17:44.671: INFO: stdout: "" +Oct 27 14:17:45.060: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9547 exec execpodmpb2q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:17:45.588: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:17:45.588: INFO: stdout: "nodeport-test-7wc5s" +Oct 27 14:17:45.588: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9547 exec execpodmpb2q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.105.157 80' +Oct 27 14:17:46.112: INFO: stderr: "+ nc -v -t -w 2 100.68.105.157 80\n+ echo hostName\nConnection to 100.68.105.157 80 port [tcp/http] succeeded!\n" +Oct 27 14:17:46.112: INFO: stdout: "nodeport-test-bql55" +Oct 27 14:17:46.112: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9547 exec execpodmpb2q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.5 30422' +Oct 27 14:17:46.614: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.5 30422\nConnection to 10.250.0.5 30422 port [tcp/*] succeeded!\n" +Oct 27 14:17:46.614: INFO: stdout: "nodeport-test-bql55" +Oct 27 14:17:46.614: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9547 exec execpodmpb2q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.4 30422' +Oct 27 14:17:47.203: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.4 30422\nConnection to 10.250.0.4 30422 port [tcp/*] succeeded!\n" +Oct 27 14:17:47.203: INFO: stdout: "" +Oct 27 14:17:48.203: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9547 exec execpodmpb2q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.4 30422' +Oct 27 14:17:48.763: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.4 30422\nConnection to 10.250.0.4 30422 port [tcp/*] succeeded!\n" +Oct 27 14:17:48.763: INFO: stdout: "nodeport-test-7wc5s" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:17:48.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9547" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":46,"skipped":639,"failed":0} +SSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:17:48.799: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-2917 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Oct 27 14:18:09.248: INFO: EndpointSlice for Service endpointslice-2917/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:19.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-2917" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":47,"skipped":645,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:19.306: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4139 +STEP: Waiting for a default service account to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 27 14:18:19.495: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:38.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4139" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":48,"skipped":665,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:38.435: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-6290 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 14:18:48.830: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:18:48.830152 5768 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 27 14:18:48.830: INFO: Deleting pod "simpletest-rc-to-be-deleted-2tbbp" in namespace "gc-6290" +Oct 27 14:18:48.848: INFO: Deleting pod "simpletest-rc-to-be-deleted-78khn" in namespace "gc-6290" +Oct 27 14:18:48.870: INFO: Deleting pod "simpletest-rc-to-be-deleted-bxp94" in namespace "gc-6290" +Oct 27 14:18:48.886: INFO: Deleting pod "simpletest-rc-to-be-deleted-klvhs" in namespace "gc-6290" +Oct 27 14:18:48.905: INFO: Deleting pod "simpletest-rc-to-be-deleted-l8nc7" in namespace "gc-6290" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:48.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6290" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":49,"skipped":696,"failed":0} +SS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:49.015: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3342 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:49.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3342" for this suite. +•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":50,"skipped":698,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:49.408: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9220 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:18:50.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:18:52.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941130, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:18:55.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:18:55.195: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-932-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:18:58.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9220" for this suite. +STEP: Destroying namespace "webhook-9220-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":51,"skipped":702,"failed":0} + +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:18:58.527: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6809 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:19:03.179: INFO: DNS probes using dns-6809/dns-test-a6b5b5a3-6678-499f-8b51-7b0e35d032b9 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:03.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6809" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":52,"skipped":702,"failed":0} +SSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:03.237: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename prestop +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-5589 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating server pod server in namespace prestop-5589 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-5589 +STEP: Deleting pre-stop pod +Oct 27 14:19:16.661: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:16.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-5589" for this suite. 
+•{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":53,"skipped":708,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:16.713: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7754 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396 +STEP: creating an pod +Oct 27 14:19:16.910: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Oct 27 14:19:17.318: INFO: stderr: "" +Oct 27 14:19:17.318: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for log generator to start. +Oct 27 14:19:17.318: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Oct 27 14:19:17.318: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7754" to be "running and ready, or succeeded" +Oct 27 14:19:17.329: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.970816ms +Oct 27 14:19:19.342: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023996548s +Oct 27 14:19:21.356: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.037389051s +Oct 27 14:19:21.356: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Oct 27 14:19:21.356: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] +STEP: checking for a matching strings +Oct 27 14:19:21.356: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 logs logs-generator logs-generator' +Oct 27 14:19:21.502: INFO: stderr: "" +Oct 27 14:19:21.502: INFO: stdout: "I1027 14:19:19.314269 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/25c 264\nI1027 14:19:19.514427 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/ddx 213\nI1027 14:19:19.714554 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/xnb7 250\nI1027 14:19:19.915018 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/nh7 418\nI1027 14:19:20.114621 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/7lx4 220\nI1027 14:19:20.315019 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/br8 498\nI1027 14:19:20.514687 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/pw4h 368\nI1027 14:19:20.715136 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/cpx 599\nI1027 14:19:20.914439 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/9sc4 597\nI1027 14:19:21.115016 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/jl8q 296\nI1027 14:19:21.314318 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/ttv 301\n" +STEP: limiting log lines +Oct 27 14:19:21.502: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 logs logs-generator logs-generator --tail=1' +Oct 27 14:19:21.696: INFO: stderr: "" +Oct 27 14:19:21.696: INFO: stdout: "I1027 14:19:21.514751 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/xww 462\n" +Oct 27 14:19:21.696: INFO: got output "I1027 14:19:21.514751 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/xww 462\n" +STEP: limiting log bytes +Oct 27 14:19:21.696: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 logs logs-generator logs-generator --limit-bytes=1' +Oct 27 14:19:21.845: INFO: stderr: "" +Oct 27 14:19:21.845: INFO: stdout: "I" +Oct 27 14:19:21.845: INFO: got output "I" +STEP: exposing timestamps +Oct 27 14:19:21.845: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 logs logs-generator logs-generator --tail=1 --timestamps' +Oct 27 14:19:21.984: INFO: stderr: "" +Oct 27 14:19:21.984: INFO: stdout: "2021-10-27T14:19:21.916185048Z I1027 14:19:21.915962 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/rlf 485\n" +Oct 27 14:19:21.984: INFO: got output "2021-10-27T14:19:21.916185048Z I1027 14:19:21.915962 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/rlf 485\n" +STEP: restricting to a time range +Oct 27 14:19:24.484: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 logs logs-generator logs-generator --since=1s' +Oct 27 14:19:24.636: INFO: stderr: "" +Oct 27 14:19:24.636: INFO: stdout: 
"I1027 14:19:23.715116 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/2ftx 401\nI1027 14:19:23.914449 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/56w 562\nI1027 14:19:24.114899 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/xxp 274\nI1027 14:19:24.314640 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/bl2f 307\nI1027 14:19:24.515024 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/9lv 229\n" +Oct 27 14:19:24.636: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 logs logs-generator logs-generator --since=24h' +Oct 27 14:19:24.784: INFO: stderr: "" +Oct 27 14:19:24.784: INFO: stdout: "I1027 14:19:19.314269 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/25c 264\nI1027 14:19:19.514427 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/ddx 213\nI1027 14:19:19.714554 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/xnb7 250\nI1027 14:19:19.915018 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/nh7 418\nI1027 14:19:20.114621 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/7lx4 220\nI1027 14:19:20.315019 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/br8 498\nI1027 14:19:20.514687 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/pw4h 368\nI1027 14:19:20.715136 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/cpx 599\nI1027 14:19:20.914439 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/9sc4 597\nI1027 14:19:21.115016 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/jl8q 296\nI1027 14:19:21.314318 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/ttv 301\nI1027 14:19:21.514751 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/xww 462\nI1027 14:19:21.715134 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/6nc 288\nI1027 14:19:21.915962 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/rlf 485\nI1027 14:19:22.114359 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/l8h5 229\nI1027 14:19:22.315388 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/wdg 506\nI1027 14:19:22.514727 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/cvbw 235\nI1027 14:19:22.715262 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/496 399\nI1027 14:19:22.914317 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/dwx4 544\nI1027 14:19:23.114759 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/gzx 596\nI1027 14:19:23.316039 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/zn62 474\nI1027 14:19:23.514389 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/6b5j 310\nI1027 14:19:23.715116 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/2ftx 401\nI1027 14:19:23.914449 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/56w 562\nI1027 14:19:24.114899 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/xxp 274\nI1027 14:19:24.314640 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/bl2f 307\nI1027 14:19:24.515024 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/9lv 229\nI1027 14:19:24.714318 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/kz4 285\n" +[AfterEach] Kubectl logs + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401 +Oct 27 14:19:24.784: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7754 delete pod logs-generator' +Oct 27 14:19:26.090: INFO: stderr: "" +Oct 27 14:19:26.090: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:26.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7754" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":54,"skipped":711,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:26.130: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2061 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create deployment with httpd image +Oct 27 14:19:26.315: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2061 create -f -' +Oct 27 14:19:26.507: INFO: stderr: "" +Oct 27 14:19:26.508: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Oct 27 14:19:26.508: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2061 diff -f -' +Oct 27 14:19:26.722: INFO: rc: 1 +Oct 27 14:19:26.722: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2061 delete -f -' +Oct 27 14:19:26.818: INFO: stderr: "" +Oct 27 14:19:26.818: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:26.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2061" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":55,"skipped":748,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:26.853: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9826 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 14:19:27.038: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 create -f -' +Oct 27 14:19:27.240: INFO: stderr: "" +Oct 27 14:19:27.240: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 14:19:27.240: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 14:19:27.337: INFO: stderr: "" +Oct 27 14:19:27.337: INFO: stdout: "update-demo-nautilus-2chzb update-demo-nautilus-v57jn " +Oct 27 14:19:27.337: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods update-demo-nautilus-2chzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:19:27.424: INFO: stderr: "" +Oct 27 14:19:27.424: INFO: stdout: "" +Oct 27 14:19:27.424: INFO: update-demo-nautilus-2chzb is created but not running +Oct 27 14:19:32.426: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 14:19:32.528: INFO: stderr: "" +Oct 27 14:19:32.528: INFO: stdout: "update-demo-nautilus-2chzb update-demo-nautilus-v57jn " +Oct 27 14:19:32.528: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods update-demo-nautilus-2chzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:19:32.622: INFO: stderr: "" +Oct 27 14:19:32.622: INFO: stdout: "" +Oct 27 14:19:32.622: INFO: update-demo-nautilus-2chzb is created but not running +Oct 27 14:19:37.624: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 14:19:37.740: INFO: stderr: "" +Oct 27 14:19:37.740: INFO: stdout: "update-demo-nautilus-2chzb update-demo-nautilus-v57jn " +Oct 27 14:19:37.740: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods update-demo-nautilus-2chzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:19:37.829: INFO: stderr: "" +Oct 27 14:19:37.829: INFO: stdout: "" +Oct 27 14:19:37.829: INFO: update-demo-nautilus-2chzb is created but not running +Oct 27 14:19:42.832: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 14:19:42.930: INFO: stderr: "" +Oct 27 14:19:42.930: INFO: stdout: "update-demo-nautilus-2chzb update-demo-nautilus-v57jn " +Oct 27 14:19:42.931: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods update-demo-nautilus-2chzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:19:43.019: INFO: stderr: "" +Oct 27 14:19:43.019: INFO: stdout: "true" +Oct 27 14:19:43.019: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods update-demo-nautilus-2chzb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 14:19:43.108: INFO: stderr: "" +Oct 27 14:19:43.108: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 14:19:43.108: INFO: validating pod update-demo-nautilus-2chzb +Oct 27 14:19:43.225: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 14:19:43.225: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 14:19:43.225: INFO: update-demo-nautilus-2chzb is verified up and running +Oct 27 14:19:43.225: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods update-demo-nautilus-v57jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 14:19:43.318: INFO: stderr: "" +Oct 27 14:19:43.318: INFO: stdout: "true" +Oct 27 14:19:43.319: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods update-demo-nautilus-v57jn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 14:19:43.408: INFO: stderr: "" +Oct 27 14:19:43.408: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 14:19:43.408: INFO: validating pod update-demo-nautilus-v57jn +Oct 27 14:19:43.565: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 14:19:43.565: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 14:19:43.565: INFO: update-demo-nautilus-v57jn is verified up and running +STEP: using delete to clean up resources +Oct 27 14:19:43.565: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 delete --grace-period=0 --force -f -' +Oct 27 14:19:43.670: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:19:43.670: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 14:19:43.670: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get rc,svc -l name=update-demo --no-headers' +Oct 27 14:19:43.769: INFO: stderr: "No resources found in kubectl-9826 namespace.\n" +Oct 27 14:19:43.769: INFO: stdout: "" +Oct 27 14:19:43.769: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9826 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 14:19:43.868: INFO: stderr: "" +Oct 27 14:19:43.868: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:43.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9826" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":56,"skipped":760,"failed":0} +SSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:43.902: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3791 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:19:44.116: INFO: The status of Pod busybox-readonly-fsc24bfac3-2aa1-4056-a30e-b05a9b180510 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:19:46.132: INFO: The status of Pod busybox-readonly-fsc24bfac3-2aa1-4056-a30e-b05a9b180510 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:19:48.129: INFO: The status of Pod busybox-readonly-fsc24bfac3-2aa1-4056-a30e-b05a9b180510 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:48.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3791" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":57,"skipped":763,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:48.228: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5734 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:19:48.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:19:50.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941188, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:19:53.806: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:54.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5734" for this suite. +STEP: Destroying namespace "webhook-5734-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":58,"skipped":776,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:54.473: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-7113 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:54.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-7113" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":59,"skipped":859,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:54.754: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-3917 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:19:55.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-3917" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":60,"skipped":874,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:19:55.093: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5464 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:20.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-5464" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":61,"skipped":901,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:20.943: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1325 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-7dc3f101-761f-40ea-aa3f-91a021971ee8 in namespace container-probe-1325 +Oct 27 14:20:25.180: INFO: Started pod busybox-7dc3f101-761f-40ea-aa3f-91a021971ee8 in namespace container-probe-1325 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:20:25.192: INFO: Initial restart count of pod busybox-7dc3f101-761f-40ea-aa3f-91a021971ee8 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:26.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1325" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":62,"skipped":961,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:26.887: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-9541 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:24:27.102: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-de7788b1-0b69-4f8a-94d1-ffbc09ad2473" in namespace "security-context-test-9541" to be "Succeeded or Failed" +Oct 27 14:24:27.112: INFO: Pod "busybox-privileged-false-de7788b1-0b69-4f8a-94d1-ffbc09ad2473": Phase="Pending", Reason="", readiness=false. Elapsed: 10.505806ms +Oct 27 14:24:29.125: INFO: Pod "busybox-privileged-false-de7788b1-0b69-4f8a-94d1-ffbc09ad2473": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023725874s +Oct 27 14:24:29.125: INFO: Pod "busybox-privileged-false-de7788b1-0b69-4f8a-94d1-ffbc09ad2473" satisfied condition "Succeeded or Failed" +Oct 27 14:24:29.260: INFO: Got logs for pod "busybox-privileged-false-de7788b1-0b69-4f8a-94d1-ffbc09ad2473": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:29.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-9541" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":63,"skipped":968,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:29.294: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9342 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 14:24:29.503: INFO: Waiting up to 5m0s for pod "pod-d2c3adf2-59a2-477d-ab20-2554833194d5" in namespace "emptydir-9342" to be "Succeeded or Failed" +Oct 27 14:24:29.514: INFO: Pod "pod-d2c3adf2-59a2-477d-ab20-2554833194d5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.833606ms +Oct 27 14:24:31.526: INFO: Pod "pod-d2c3adf2-59a2-477d-ab20-2554833194d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023038054s +Oct 27 14:24:33.539: INFO: Pod "pod-d2c3adf2-59a2-477d-ab20-2554833194d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035634903s +STEP: Saw pod success +Oct 27 14:24:33.539: INFO: Pod "pod-d2c3adf2-59a2-477d-ab20-2554833194d5" satisfied condition "Succeeded or Failed" +Oct 27 14:24:33.550: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-d2c3adf2-59a2-477d-ab20-2554833194d5 container test-container: +STEP: delete the pod +Oct 27 14:24:33.665: INFO: Waiting for pod pod-d2c3adf2-59a2-477d-ab20-2554833194d5 to disappear +Oct 27 14:24:33.676: INFO: Pod pod-d2c3adf2-59a2-477d-ab20-2554833194d5 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:33.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9342" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":64,"skipped":970,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:33.710: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Oct 27 14:24:33.896: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:24:37.593: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:51.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":65,"skipped":982,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:51.521: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-366 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Oct 27 14:24:51.733: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:24:53.745: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:24:55.745: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Oct 27 14:24:55.790: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:55.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-366" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":66,"skipped":991,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:55.866: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3445 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:24:56.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:24:58.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:25:01.697: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] 
patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:02.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3445" for this suite. +STEP: Destroying namespace "webhook-3445-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":67,"skipped":1011,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:02.243: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-6893 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:25:02.473: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:25:02.517: INFO: Number of nodes with available pods: 0 +Oct 27 14:25:02.517: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:25:03.558: INFO: Number of nodes with available pods: 0 +Oct 27 14:25:03.558: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:25:04.550: INFO: Number of nodes with available pods: 1 +Oct 27 14:25:04.550: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:25:05.551: INFO: Number of nodes with available pods: 2 +Oct 27 14:25:05.551: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Oct 27 14:25:05.636: INFO: Wrong image for pod: daemon-set-ms8tt. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:25:06.662: INFO: Wrong image for pod: daemon-set-ms8tt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:25:07.662: INFO: Pod daemon-set-9m9rm is not available +Oct 27 14:25:07.662: INFO: Wrong image for pod: daemon-set-ms8tt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:25:08.661: INFO: Pod daemon-set-9m9rm is not available +Oct 27 14:25:08.661: INFO: Wrong image for pod: daemon-set-ms8tt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:25:11.661: INFO: Pod daemon-set-nz978 is not available +STEP: Check that daemon pods are still running on every node of the cluster. +Oct 27 14:25:11.705: INFO: Number of nodes with available pods: 1 +Oct 27 14:25:11.705: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:25:12.739: INFO: Number of nodes with available pods: 1 +Oct 27 14:25:12.739: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:25:13.739: INFO: Number of nodes with available pods: 2 +Oct 27 14:25:13.739: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6893, will wait for the garbage collector to delete the pods +Oct 27 14:25:13.870: INFO: Deleting DaemonSet.extensions daemon-set took: 13.470493ms +Oct 27 14:25:13.970: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.211641ms +Oct 27 14:25:16.282: INFO: Number of nodes with available pods: 0 +Oct 27 14:25:16.282: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:25:16.294: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"15750"},"items":null} + +Oct 27 14:25:16.305: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"15750"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:16.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6893" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":68,"skipped":1044,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:16.375: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9796 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:25:16.562: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9796 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 14:25:16.679: INFO: stderr: "" +Oct 27 14:25:16.679: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Oct 27 14:25:21.730: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9796 get pod e2e-test-httpd-pod -o json' +Oct 27 14:25:21.826: INFO: stderr: "" +Oct 27 14:25:21.826: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"fe321481fd290accb58ccb37f770408e2b8713433c5c6df152f66f425260353c\",\n \"cni.projectcalico.org/podIP\": \"100.96.1.82/32\",\n \"cni.projectcalico.org/podIPs\": \"100.96.1.82/32\",\n \"kubernetes.io/psp\": \"e2e-test-privileged-psp\"\n },\n \"creationTimestamp\": \"2021-10-27T14:25:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9796\",\n \"resourceVersion\": \"15774\",\n \"uid\": \"49351a01-42ff-4644-baa3-f0683e0ff3f2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"env\": [\n {\n \"name\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"api.tmgxs-skc.it.internal.staging.k8s.ondemand.com\"\n }\n ],\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-ffnrh\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-ffnrh\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:25:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:25:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:25:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:25:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://089b4debe2848d900a96c99c7627d6f523cdb2a888e62b48e9d72dbe5cfa819c\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-27T14:25:18Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.250.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"100.96.1.82\",\n \"podIPs\": [\n {\n \"ip\": \"100.96.1.82\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-27T14:25:16Z\"\n }\n}\n" +STEP: replace the image in the pod +Oct 27 14:25:21.827: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9796 replace -f -' +Oct 27 14:25:22.055: INFO: stderr: "" +Oct 27 14:25:22.055: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 +Oct 27 14:25:22.067: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9796 delete pods e2e-test-httpd-pod' +Oct 27 14:25:24.208: INFO: stderr: "" +Oct 27 14:25:24.208: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:24.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9796" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":69,"skipped":1061,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:24.243: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-44 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-44/configmap-test-a48eef1e-456e-4640-80fd-0a380bdfcdd5 +STEP: Creating a pod to test consume configMaps +Oct 27 14:25:24.458: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d5cc223-7aad-42f7-b39c-ccf5b2466a57" in namespace "configmap-44" to be "Succeeded or Failed" +Oct 27 14:25:24.469: INFO: Pod "pod-configmaps-6d5cc223-7aad-42f7-b39c-ccf5b2466a57": Phase="Pending", Reason="", readiness=false. Elapsed: 10.921434ms +Oct 27 14:25:26.482: INFO: Pod "pod-configmaps-6d5cc223-7aad-42f7-b39c-ccf5b2466a57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023492933s +STEP: Saw pod success +Oct 27 14:25:26.482: INFO: Pod "pod-configmaps-6d5cc223-7aad-42f7-b39c-ccf5b2466a57" satisfied condition "Succeeded or Failed" +Oct 27 14:25:26.493: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-6d5cc223-7aad-42f7-b39c-ccf5b2466a57 container env-test: +STEP: delete the pod +Oct 27 14:25:26.561: INFO: Waiting for pod pod-configmaps-6d5cc223-7aad-42f7-b39c-ccf5b2466a57 to disappear +Oct 27 14:25:26.573: INFO: Pod pod-configmaps-6d5cc223-7aad-42f7-b39c-ccf5b2466a57 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:26.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-44" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":70,"skipped":1069,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:26.606: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2880 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2880" for this suite. + +• [SLOW TEST:334.297 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":71,"skipped":1076,"failed":0} +S +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:00.904: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5134 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:01.115: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:31:01.157: INFO: The status of Pod pod-exec-websocket-68220b09-9853-451f-899c-0594e874e190 is Pending, 
waiting for it to be Running (with Ready = true) +Oct 27 14:31:03.169: INFO: The status of Pod pod-exec-websocket-68220b09-9853-451f-899c-0594e874e190 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:31:05.170: INFO: The status of Pod pod-exec-websocket-68220b09-9853-451f-899c-0594e874e190 is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:05.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5134" for this suite. +•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":72,"skipped":1077,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:05.506: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-2767 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:31:06.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941866, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941866, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-crd-conversion-webhook-deployment-697cdbd8f4\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941866, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941866, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:31:09.270: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:09.282: INFO: >>> kubeConfig: 
/tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:12.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-2767" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":73,"skipped":1093,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:12.471: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename discovery +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-9041 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:13.200: INFO: Checking APIGroup: apiregistration.k8s.io +Oct 27 14:31:13.210: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Oct 27 14:31:13.210: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Oct 27 14:31:13.210: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Oct 27 14:31:13.210: INFO: Checking APIGroup: apps +Oct 27 14:31:13.220: INFO: PreferredVersion.GroupVersion: apps/v1 +Oct 27 14:31:13.220: INFO: Versions found [{apps/v1 v1}] +Oct 27 14:31:13.220: INFO: apps/v1 matches apps/v1 +Oct 27 14:31:13.220: INFO: Checking APIGroup: events.k8s.io +Oct 27 14:31:13.234: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Oct 27 14:31:13.234: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Oct 27 14:31:13.234: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Oct 27 14:31:13.234: INFO: Checking APIGroup: authentication.k8s.io +Oct 27 14:31:13.244: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Oct 27 14:31:13.244: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Oct 27 14:31:13.244: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Oct 27 14:31:13.244: INFO: Checking APIGroup: authorization.k8s.io +Oct 27 14:31:13.253: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Oct 27 14:31:13.253: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Oct 27 14:31:13.253: INFO: authorization.k8s.io/v1 matches 
authorization.k8s.io/v1 +Oct 27 14:31:13.253: INFO: Checking APIGroup: autoscaling +Oct 27 14:31:13.263: INFO: PreferredVersion.GroupVersion: autoscaling/v1 +Oct 27 14:31:13.263: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Oct 27 14:31:13.263: INFO: autoscaling/v1 matches autoscaling/v1 +Oct 27 14:31:13.263: INFO: Checking APIGroup: batch +Oct 27 14:31:13.273: INFO: PreferredVersion.GroupVersion: batch/v1 +Oct 27 14:31:13.273: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Oct 27 14:31:13.273: INFO: batch/v1 matches batch/v1 +Oct 27 14:31:13.273: INFO: Checking APIGroup: certificates.k8s.io +Oct 27 14:31:13.283: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Oct 27 14:31:13.283: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Oct 27 14:31:13.283: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Oct 27 14:31:13.283: INFO: Checking APIGroup: networking.k8s.io +Oct 27 14:31:13.293: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Oct 27 14:31:13.293: INFO: Versions found [{networking.k8s.io/v1 v1}] +Oct 27 14:31:13.293: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Oct 27 14:31:13.293: INFO: Checking APIGroup: policy +Oct 27 14:31:13.303: INFO: PreferredVersion.GroupVersion: policy/v1 +Oct 27 14:31:13.303: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Oct 27 14:31:13.303: INFO: policy/v1 matches policy/v1 +Oct 27 14:31:13.303: INFO: Checking APIGroup: rbac.authorization.k8s.io +Oct 27 14:31:13.313: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Oct 27 14:31:13.313: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Oct 27 14:31:13.313: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Oct 27 14:31:13.313: INFO: Checking APIGroup: storage.k8s.io +Oct 27 14:31:13.322: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Oct 27 14:31:13.322: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Oct 27 14:31:13.322: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Oct 27 14:31:13.322: INFO: Checking APIGroup: admissionregistration.k8s.io +Oct 27 14:31:13.332: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Oct 27 14:31:13.332: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Oct 27 14:31:13.332: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Oct 27 14:31:13.332: INFO: Checking APIGroup: apiextensions.k8s.io +Oct 27 14:31:13.342: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Oct 27 14:31:13.342: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Oct 27 14:31:13.342: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Oct 27 14:31:13.342: INFO: Checking APIGroup: scheduling.k8s.io +Oct 27 14:31:13.351: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Oct 27 14:31:13.352: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Oct 27 14:31:13.352: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Oct 27 14:31:13.352: INFO: Checking APIGroup: coordination.k8s.io +Oct 27 14:31:13.361: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Oct 27 14:31:13.361: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Oct 27 14:31:13.361: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Oct 27 14:31:13.361: INFO: Checking APIGroup: node.k8s.io +Oct 27 14:31:13.371: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Oct 27 14:31:13.371: INFO: Versions found 
[{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Oct 27 14:31:13.371: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Oct 27 14:31:13.371: INFO: Checking APIGroup: discovery.k8s.io +Oct 27 14:31:13.381: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Oct 27 14:31:13.381: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Oct 27 14:31:13.381: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Oct 27 14:31:13.381: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Oct 27 14:31:13.391: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 14:31:13.391: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Oct 27 14:31:13.391: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 14:31:13.391: INFO: Checking APIGroup: autoscaling.k8s.io +Oct 27 14:31:13.411: INFO: PreferredVersion.GroupVersion: autoscaling.k8s.io/v1 +Oct 27 14:31:13.411: INFO: Versions found [{autoscaling.k8s.io/v1 v1} {autoscaling.k8s.io/v1beta2 v1beta2}] +Oct 27 14:31:13.411: INFO: autoscaling.k8s.io/v1 matches autoscaling.k8s.io/v1 +Oct 27 14:31:13.411: INFO: Checking APIGroup: crd.projectcalico.org +Oct 27 14:31:13.420: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Oct 27 14:31:13.420: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Oct 27 14:31:13.420: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Oct 27 14:31:13.420: INFO: Checking APIGroup: cert.gardener.cloud +Oct 27 14:31:13.433: INFO: PreferredVersion.GroupVersion: cert.gardener.cloud/v1alpha1 +Oct 27 14:31:13.433: INFO: Versions found [{cert.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 14:31:13.433: INFO: cert.gardener.cloud/v1alpha1 matches cert.gardener.cloud/v1alpha1 +Oct 27 14:31:13.433: INFO: Checking APIGroup: dns.gardener.cloud +Oct 27 14:31:13.442: INFO: PreferredVersion.GroupVersion: dns.gardener.cloud/v1alpha1 +Oct 27 14:31:13.442: INFO: Versions found [{dns.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 14:31:13.443: INFO: dns.gardener.cloud/v1alpha1 matches dns.gardener.cloud/v1alpha1 +Oct 27 14:31:13.443: INFO: Checking APIGroup: snapshot.storage.k8s.io +Oct 27 14:31:13.453: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 +Oct 27 14:31:13.453: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] +Oct 27 14:31:13.453: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 +Oct 27 14:31:13.453: INFO: Checking APIGroup: metrics.k8s.io +Oct 27 14:31:13.463: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Oct 27 14:31:13.463: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Oct 27 14:31:13.463: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:13.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-9041" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":74,"skipped":1153,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:13.512: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1205 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 14:31:13.980: INFO: Waiting up to 5m0s for pod "pod-b128978a-d84a-4984-8e70-36e2d0c883d7" in namespace "emptydir-1205" to be "Succeeded or Failed" +Oct 27 14:31:13.990: INFO: Pod "pod-b128978a-d84a-4984-8e70-36e2d0c883d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.585516ms +Oct 27 14:31:16.002: INFO: Pod "pod-b128978a-d84a-4984-8e70-36e2d0c883d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02274089s +Oct 27 14:31:18.015: INFO: Pod "pod-b128978a-d84a-4984-8e70-36e2d0c883d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035475299s +STEP: Saw pod success +Oct 27 14:31:18.015: INFO: Pod "pod-b128978a-d84a-4984-8e70-36e2d0c883d7" satisfied condition "Succeeded or Failed" +Oct 27 14:31:18.026: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-b128978a-d84a-4984-8e70-36e2d0c883d7 container test-container: +STEP: delete the pod +Oct 27 14:31:18.139: INFO: Waiting for pod pod-b128978a-d84a-4984-8e70-36e2d0c883d7 to disappear +Oct 27 14:31:18.150: INFO: Pod pod-b128978a-d84a-4984-8e70-36e2d0c883d7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:18.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1205" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":75,"skipped":1196,"failed":0} + +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:18.182: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1458 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name secret-emptykey-test-b768d49e-7fa6-4fc1-8d54-0445e5629228 +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:18.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1458" for this suite. +•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":76,"skipped":1196,"failed":0} +S +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:18.404: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-8913 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:31:18.619: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:31:18.640: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:31:18.713: INFO: waiting for watch events with expected annotations +Oct 27 14:31:18.713: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:18.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-8913" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":77,"skipped":1197,"failed":0} +SSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:18.855: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-242 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:23.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-242" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":78,"skipped":1202,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:23.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-6580 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:31.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-6580" for this suite. +•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":79,"skipped":1282,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:31.437: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9365 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 14:31:31.643: INFO: Waiting up to 5m0s for pod "pod-f1e45482-e225-4900-880e-565cf9e05386" in namespace "emptydir-9365" to be "Succeeded or Failed" +Oct 27 14:31:31.655: INFO: Pod "pod-f1e45482-e225-4900-880e-565cf9e05386": Phase="Pending", Reason="", readiness=false. Elapsed: 11.354254ms +Oct 27 14:31:33.667: INFO: Pod "pod-f1e45482-e225-4900-880e-565cf9e05386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023203911s +Oct 27 14:31:35.678: INFO: Pod "pod-f1e45482-e225-4900-880e-565cf9e05386": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034977482s +STEP: Saw pod success +Oct 27 14:31:35.678: INFO: Pod "pod-f1e45482-e225-4900-880e-565cf9e05386" satisfied condition "Succeeded or Failed" +Oct 27 14:31:35.689: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-f1e45482-e225-4900-880e-565cf9e05386 container test-container: +STEP: delete the pod +Oct 27 14:31:35.801: INFO: Waiting for pod pod-f1e45482-e225-4900-880e-565cf9e05386 to disappear +Oct 27 14:31:35.813: INFO: Pod pod-f1e45482-e225-4900-880e-565cf9e05386 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:35.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9365" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":80,"skipped":1288,"failed":0} +SS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:35.847: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7775 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:36.094: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e6d50847-4ddc-4424-b542-d434e1e61e00", Controller:(*bool)(0xc00511be06), BlockOwnerDeletion:(*bool)(0xc00511be07)}} +Oct 27 14:31:36.106: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9e94df08-a06e-492f-92c6-36ea61e74eab", Controller:(*bool)(0xc0026bf46e), BlockOwnerDeletion:(*bool)(0xc0026bf46f)}} +Oct 27 14:31:36.119: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8ceb4554-9598-49a7-974a-90c51b072b69", Controller:(*bool)(0xc001bc11be), BlockOwnerDeletion:(*bool)(0xc001bc11bf)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:41.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7775" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":81,"skipped":1290,"failed":0} + +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:41.182: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename aggregator +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-1009 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Oct 27 14:31:41.393: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the sample API server. +Oct 27 14:31:41.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:43.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:45.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:47.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:49.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:51.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:53.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:55.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:31:57.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941901, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:32:04.722: INFO: Waited 4.734334528s for the sample-apiserver to be ready to handle requests. 
+STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Oct 27 14:32:05.304: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:05.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-1009" for this suite. +•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":82,"skipped":1290,"failed":0} +SSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:05.849: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1910 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:32:09.448: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:09.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-1910" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":83,"skipped":1293,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:09.513: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4765 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating api versions +Oct 27 14:32:09.694: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4765 api-versions' +Oct 27 14:32:09.813: INFO: stderr: "" +Oct 27 14:32:09.813: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling.k8s.io/v1\nautoscaling.k8s.io/v1beta2\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncert.gardener.cloud/v1alpha1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\ndns.gardener.cloud/v1alpha1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:09.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4765" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":84,"skipped":1337,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:09.838: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3648 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:32:10.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941930, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941930, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941930, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941930, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:32:13.789: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:14.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3648" for this suite. +STEP: Destroying namespace "webhook-3648-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":85,"skipped":1361,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:14.450: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-590 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating secret secrets-590/secret-test-28499793-9ff4-4333-b043-eb8bfcb1bcfb +STEP: Creating a pod to test consume secrets +Oct 27 14:32:14.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09" in namespace "secrets-590" to be "Succeeded or Failed" +Oct 27 14:32:14.675: INFO: Pod "pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09": Phase="Pending", Reason="", readiness=false. Elapsed: 11.411313ms +Oct 27 14:32:16.687: INFO: Pod "pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09": Phase="Running", Reason="", readiness=true. Elapsed: 2.023023527s +Oct 27 14:32:18.699: INFO: Pod "pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035922293s +STEP: Saw pod success +Oct 27 14:32:18.700: INFO: Pod "pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09" satisfied condition "Succeeded or Failed" +Oct 27 14:32:18.712: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09 container env-test: +STEP: delete the pod +Oct 27 14:32:18.783: INFO: Waiting for pod pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09 to disappear +Oct 27 14:32:18.794: INFO: Pod pod-configmaps-678ac092-3444-4a88-bcd6-2a9cd06f0d09 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:18.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-590" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":86,"skipped":1384,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:18.827: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-616 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Oct 27 14:32:29.111: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:29.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1027 14:32:29.111699 5768 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-616" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":87,"skipped":1393,"failed":0} +SSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:29.137: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8812 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:32:31.373: INFO: Deleting pod "var-expansion-3bf43277-4131-4e71-b3ca-12ff95775e1a" in namespace "var-expansion-8812" +Oct 27 14:32:31.387: INFO: Wait up to 5m0s for pod "var-expansion-3bf43277-4131-4e71-b3ca-12ff95775e1a" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:35.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-8812" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":88,"skipped":1398,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:35.444: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3190 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Oct 27 14:33:15.740: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For 
garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Oct 27 14:33:15.740: INFO: Deleting pod "simpletest.rc-7j69r" in namespace "gc-3190" +W1027 14:33:15.740773 5768 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 27 14:33:15.758: INFO: Deleting pod "simpletest.rc-7jdd2" in namespace "gc-3190" +Oct 27 14:33:15.776: INFO: Deleting pod "simpletest.rc-h4t25" in namespace "gc-3190" +Oct 27 14:33:15.792: INFO: Deleting pod "simpletest.rc-hl62b" in namespace "gc-3190" +Oct 27 14:33:15.806: INFO: Deleting pod "simpletest.rc-hmbd9" in namespace "gc-3190" +Oct 27 14:33:15.824: INFO: Deleting pod "simpletest.rc-k5v52" in namespace "gc-3190" +Oct 27 14:33:15.841: INFO: Deleting pod "simpletest.rc-kd8gr" in namespace "gc-3190" +Oct 27 14:33:15.858: INFO: Deleting pod "simpletest.rc-qjhj4" in namespace "gc-3190" +Oct 27 14:33:15.876: INFO: Deleting pod "simpletest.rc-wlnjj" in namespace "gc-3190" +Oct 27 14:33:15.894: INFO: Deleting pod "simpletest.rc-xwzwc" in namespace "gc-3190" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:15.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3190" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":89,"skipped":1410,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:15.937: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-3176 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-h5nbf in namespace proxy-3176 +I1027 14:33:16.153983 5768 runners.go:190] Created replication controller with name: proxy-service-h5nbf, namespace: proxy-3176, replica count: 1 +I1027 14:33:17.205247 5768 runners.go:190] proxy-service-h5nbf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:33:18.205425 5768 runners.go:190] proxy-service-h5nbf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:33:19.205657 5768 runners.go:190] proxy-service-h5nbf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:33:20.206662 5768 runners.go:190] proxy-service-h5nbf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:33:21.207029 5768 runners.go:190] proxy-service-h5nbf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:33:21.218: INFO: setup took 5.095615677s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Oct 27 14:33:21.315: INFO: (0) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 96.865636ms) +Oct 27 14:33:21.315: INFO: (0) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 96.884ms) +Oct 27 14:33:21.318: INFO: (0) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 99.933719ms) +Oct 27 14:33:21.345: INFO: (0) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... 
(200; 127.539267ms) +Oct 27 14:33:21.345: INFO: (0) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 127.510479ms) +Oct 27 14:33:21.345: INFO: (0) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 127.68828ms) +Oct 27 14:33:21.345: INFO: (0) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 127.488691ms) +Oct 27 14:33:21.346: INFO: (0) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 127.595137ms) +Oct 27 14:33:21.346: INFO: (0) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 127.805423ms) +Oct 27 14:33:21.346: INFO: (0) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 127.73371ms) +Oct 27 14:33:21.346: INFO: (0) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 127.869957ms) +Oct 27 14:33:21.351: INFO: (0) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 133.462684ms) +Oct 27 14:33:21.371: INFO: (0) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 152.768244ms) +Oct 27 14:33:21.371: INFO: (0) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 152.803287ms) +Oct 27 14:33:21.371: INFO: (0) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 153.035993ms) +Oct 27 14:33:21.455: INFO: (0) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test<... (200; 31.638202ms) +Oct 27 14:33:21.487: INFO: (1) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 31.76928ms) +Oct 27 14:33:21.487: INFO: (1) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 31.773664ms) +Oct 27 14:33:21.487: INFO: (1) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 31.625572ms) +Oct 27 14:33:21.504: INFO: (1) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 48.675229ms) +Oct 27 14:33:21.504: INFO: (1) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 48.632155ms) +Oct 27 14:33:21.504: INFO: (1) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 48.588132ms) +Oct 27 14:33:21.505: INFO: (1) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 49.924172ms) +Oct 27 14:33:21.505: INFO: (1) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test (200; 32.14236ms) +Oct 27 14:33:21.579: INFO: (2) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 39.163373ms) +Oct 27 14:33:21.579: INFO: (2) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 39.282461ms) +Oct 27 14:33:21.579: INFO: (2) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... 
(200; 39.147468ms) +Oct 27 14:33:21.579: INFO: (2) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 39.119647ms) +Oct 27 14:33:21.579: INFO: (2) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 39.165979ms) +Oct 27 14:33:21.579: INFO: (2) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 39.099383ms) +Oct 27 14:33:21.610: INFO: (2) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 70.472708ms) +Oct 27 14:33:21.610: INFO: (2) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 70.751345ms) +Oct 27 14:33:21.611: INFO: (2) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 70.936494ms) +Oct 27 14:33:21.611: INFO: (2) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 70.957585ms) +Oct 27 14:33:21.611: INFO: (2) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 71.194059ms) +Oct 27 14:33:21.611: INFO: (2) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 71.232065ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 35.158052ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 35.23069ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 35.312347ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 35.204438ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 35.193279ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 35.224233ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 35.191942ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 35.073107ms) +Oct 27 14:33:21.646: INFO: (3) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 35.168837ms) +Oct 27 14:33:21.662: INFO: (3) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 50.706038ms) +Oct 27 14:33:21.662: INFO: (3) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test (200; 31.310966ms) +Oct 27 14:33:21.730: INFO: (4) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 31.315062ms) +Oct 27 14:33:21.735: INFO: (4) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 35.491832ms) +Oct 27 14:33:21.735: INFO: (4) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 35.675387ms) +Oct 27 14:33:21.735: INFO: (4) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.692065ms) +Oct 27 14:33:21.735: INFO: (4) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... 
(200; 35.581695ms) +Oct 27 14:33:21.735: INFO: (4) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.66625ms) +Oct 27 14:33:21.750: INFO: (4) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test<... (200; 50.631766ms) +Oct 27 14:33:21.750: INFO: (4) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 50.678044ms) +Oct 27 14:33:21.750: INFO: (4) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 50.840458ms) +Oct 27 14:33:21.752: INFO: (4) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 53.198694ms) +Oct 27 14:33:21.752: INFO: (4) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 53.211597ms) +Oct 27 14:33:21.770: INFO: (4) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 71.301839ms) +Oct 27 14:33:21.770: INFO: (4) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 71.391806ms) +Oct 27 14:33:21.804: INFO: (5) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 33.611739ms) +Oct 27 14:33:21.804: INFO: (5) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 33.49456ms) +Oct 27 14:33:21.804: INFO: (5) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 33.439064ms) +Oct 27 14:33:21.804: INFO: (5) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 33.589109ms) +Oct 27 14:33:21.806: INFO: (5) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 35.499416ms) +Oct 27 14:33:21.806: INFO: (5) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.599065ms) +Oct 27 14:33:21.806: INFO: (5) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 35.653034ms) +Oct 27 14:33:21.806: INFO: (5) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: ... (200; 53.151563ms) +Oct 27 14:33:21.824: INFO: (5) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 53.239281ms) +Oct 27 14:33:21.824: INFO: (5) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 53.025976ms) +Oct 27 14:33:21.824: INFO: (5) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 53.153358ms) +Oct 27 14:33:21.842: INFO: (5) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 71.314733ms) +Oct 27 14:33:21.861: INFO: (5) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 89.827847ms) +Oct 27 14:33:21.878: INFO: (5) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 107.498905ms) +Oct 27 14:33:21.911: INFO: (6) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 32.152051ms) +Oct 27 14:33:21.911: INFO: (6) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test (200; 32.462445ms) +Oct 27 14:33:21.915: INFO: (6) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... 
(200; 36.115069ms) +Oct 27 14:33:21.915: INFO: (6) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 36.137889ms) +Oct 27 14:33:21.915: INFO: (6) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 36.093079ms) +Oct 27 14:33:21.915: INFO: (6) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 36.274659ms) +Oct 27 14:33:21.917: INFO: (6) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 38.407242ms) +Oct 27 14:33:21.932: INFO: (6) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 53.741074ms) +Oct 27 14:33:21.932: INFO: (6) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 53.903522ms) +Oct 27 14:33:21.932: INFO: (6) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 53.755525ms) +Oct 27 14:33:21.950: INFO: (6) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 71.848015ms) +Oct 27 14:33:21.950: INFO: (6) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 71.814111ms) +Oct 27 14:33:21.950: INFO: (6) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 71.833813ms) +Oct 27 14:33:21.987: INFO: (6) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 108.268977ms) +Oct 27 14:33:22.019: INFO: (7) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 32.441284ms) +Oct 27 14:33:22.019: INFO: (7) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 32.336423ms) +Oct 27 14:33:22.019: INFO: (7) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 32.394306ms) +Oct 27 14:33:22.020: INFO: (7) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.385401ms) +Oct 27 14:33:22.022: INFO: (7) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.280344ms) +Oct 27 14:33:22.022: INFO: (7) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 35.198246ms) +Oct 27 14:33:22.022: INFO: (7) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 35.442277ms) +Oct 27 14:33:22.039: INFO: (7) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 52.249542ms) +Oct 27 14:33:22.039: INFO: (7) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 52.255378ms) +Oct 27 14:33:22.039: INFO: (7) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 52.356725ms) +Oct 27 14:33:22.039: INFO: (7) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 52.33263ms) +Oct 27 14:33:22.040: INFO: (7) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: ... 
(200; 32.971651ms) +Oct 27 14:33:22.109: INFO: (8) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 33.06753ms) +Oct 27 14:33:22.109: INFO: (8) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 32.979398ms) +Oct 27 14:33:22.109: INFO: (8) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 33.1015ms) +Oct 27 14:33:22.109: INFO: (8) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 33.066733ms) +Oct 27 14:33:22.111: INFO: (8) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test (200; 52.621093ms) +Oct 27 14:33:22.129: INFO: (8) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 52.837613ms) +Oct 27 14:33:22.129: INFO: (8) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 52.864497ms) +Oct 27 14:33:22.147: INFO: (8) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 70.661757ms) +Oct 27 14:33:22.147: INFO: (8) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 70.601314ms) +Oct 27 14:33:22.147: INFO: (8) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 70.792789ms) +Oct 27 14:33:22.147: INFO: (8) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 70.731564ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 32.47057ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.672475ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 32.543712ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 32.557403ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 32.648814ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 32.775207ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 32.757815ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 32.705407ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 32.856221ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 32.974641ms) +Oct 27 14:33:22.180: INFO: (9) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 32.92342ms) +Oct 27 14:33:22.197: INFO: (9) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: ... 
(200; 37.8816ms) +Oct 27 14:33:22.271: INFO: (10) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 37.942115ms) +Oct 27 14:33:22.271: INFO: (10) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 37.911651ms) +Oct 27 14:33:22.287: INFO: (10) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 53.437348ms) +Oct 27 14:33:22.287: INFO: (10) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 53.403825ms) +Oct 27 14:33:22.287: INFO: (10) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 53.480811ms) +Oct 27 14:33:22.287: INFO: (10) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 53.443536ms) +Oct 27 14:33:22.287: INFO: (10) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 53.488795ms) +Oct 27 14:33:22.288: INFO: (10) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 55.188682ms) +Oct 27 14:33:22.288: INFO: (10) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 55.152268ms) +Oct 27 14:33:22.324: INFO: (10) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 90.958843ms) +Oct 27 14:33:22.324: INFO: (10) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 90.878284ms) +Oct 27 14:33:22.357: INFO: (11) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 32.70909ms) +Oct 27 14:33:22.357: INFO: (11) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.627795ms) +Oct 27 14:33:22.357: INFO: (11) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 32.730568ms) +Oct 27 14:33:22.357: INFO: (11) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.637517ms) +Oct 27 14:33:22.357: INFO: (11) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test<... 
(200; 35.751838ms) +Oct 27 14:33:22.360: INFO: (11) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 35.956353ms) +Oct 27 14:33:22.360: INFO: (11) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 36.079981ms) +Oct 27 14:33:22.378: INFO: (11) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 53.795278ms) +Oct 27 14:33:22.378: INFO: (11) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 53.845864ms) +Oct 27 14:33:22.378: INFO: (11) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 53.970767ms) +Oct 27 14:33:22.378: INFO: (11) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 53.842795ms) +Oct 27 14:33:22.396: INFO: (11) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 71.721199ms) +Oct 27 14:33:22.396: INFO: (11) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 71.744077ms) +Oct 27 14:33:22.414: INFO: (11) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 89.399149ms) +Oct 27 14:33:22.414: INFO: (11) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 89.196042ms) +Oct 27 14:33:22.447: INFO: (12) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.738053ms) +Oct 27 14:33:22.447: INFO: (12) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.609567ms) +Oct 27 14:33:22.447: INFO: (12) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 32.599695ms) +Oct 27 14:33:22.450: INFO: (12) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 35.694973ms) +Oct 27 14:33:22.450: INFO: (12) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 35.806984ms) +Oct 27 14:33:22.450: INFO: (12) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 35.851242ms) +Oct 27 14:33:22.450: INFO: (12) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 35.750362ms) +Oct 27 14:33:22.466: INFO: (12) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test (200; 51.852306ms) +Oct 27 14:33:22.467: INFO: (12) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 53.07922ms) +Oct 27 14:33:22.467: INFO: (12) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 53.240204ms) +Oct 27 14:33:22.485: INFO: (12) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 70.78124ms) +Oct 27 14:33:22.485: INFO: (12) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 70.898591ms) +Oct 27 14:33:22.518: INFO: (13) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 32.367532ms) +Oct 27 14:33:22.518: INFO: (13) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 32.608163ms) +Oct 27 14:33:22.518: INFO: (13) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 32.540657ms) +Oct 27 14:33:22.526: INFO: (13) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... 
(200; 40.235424ms) +Oct 27 14:33:22.526: INFO: (13) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 40.743685ms) +Oct 27 14:33:22.526: INFO: (13) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test<... (200; 33.127019ms) +Oct 27 14:33:22.634: INFO: (14) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.951184ms) +Oct 27 14:33:22.634: INFO: (14) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.988658ms) +Oct 27 14:33:22.634: INFO: (14) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 32.996504ms) +Oct 27 14:33:22.636: INFO: (14) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 35.539931ms) +Oct 27 14:33:22.636: INFO: (14) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: ... (200; 35.73058ms) +Oct 27 14:33:22.653: INFO: (14) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 52.755505ms) +Oct 27 14:33:22.653: INFO: (14) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 53.067822ms) +Oct 27 14:33:22.653: INFO: (14) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 52.857499ms) +Oct 27 14:33:22.653: INFO: (14) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 52.855161ms) +Oct 27 14:33:22.653: INFO: (14) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 52.952186ms) +Oct 27 14:33:22.653: INFO: (14) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 52.892916ms) +Oct 27 14:33:22.653: INFO: (14) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 53.094719ms) +Oct 27 14:33:22.691: INFO: (14) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 89.99203ms) +Oct 27 14:33:22.726: INFO: (15) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.179021ms) +Oct 27 14:33:22.726: INFO: (15) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 35.196851ms) +Oct 27 14:33:22.726: INFO: (15) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.34282ms) +Oct 27 14:33:22.727: INFO: (15) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 36.207543ms) +Oct 27 14:33:22.727: INFO: (15) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 36.312454ms) +Oct 27 14:33:22.727: INFO: (15) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: ... 
(200; 36.260749ms) +Oct 27 14:33:22.743: INFO: (15) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 52.333543ms) +Oct 27 14:33:22.748: INFO: (15) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 56.759153ms) +Oct 27 14:33:22.748: INFO: (15) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 56.865432ms) +Oct 27 14:33:22.748: INFO: (15) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 56.842333ms) +Oct 27 14:33:22.748: INFO: (15) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 57.014581ms) +Oct 27 14:33:22.761: INFO: (15) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 70.422323ms) +Oct 27 14:33:22.761: INFO: (15) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 70.386863ms) +Oct 27 14:33:22.794: INFO: (16) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.030858ms) +Oct 27 14:33:22.794: INFO: (16) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 32.13317ms) +Oct 27 14:33:22.794: INFO: (16) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 32.332348ms) +Oct 27 14:33:22.794: INFO: (16) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test (200; 32.243014ms) +Oct 27 14:33:22.794: INFO: (16) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 32.161589ms) +Oct 27 14:33:22.794: INFO: (16) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 32.248639ms) +Oct 27 14:33:22.794: INFO: (16) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 32.463868ms) +Oct 27 14:33:22.800: INFO: (16) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 38.376576ms) +Oct 27 14:33:22.820: INFO: (16) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 58.906818ms) +Oct 27 14:33:22.820: INFO: (16) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 58.991978ms) +Oct 27 14:33:22.838: INFO: (16) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 75.974505ms) +Oct 27 14:33:22.855: INFO: (16) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 93.667748ms) +Oct 27 14:33:22.891: INFO: (17) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... 
(200; 35.885086ms) +Oct 27 14:33:22.891: INFO: (17) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.805798ms) +Oct 27 14:33:22.891: INFO: (17) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 35.870288ms) +Oct 27 14:33:22.891: INFO: (17) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 35.941293ms) +Oct 27 14:33:22.909: INFO: (17) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 53.165421ms) +Oct 27 14:33:22.910: INFO: (17) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 54.108477ms) +Oct 27 14:33:22.910: INFO: (17) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname2/proxy/: tls qux (200; 54.245842ms) +Oct 27 14:33:22.910: INFO: (17) /api/v1/namespaces/proxy-3176/services/https:proxy-service-h5nbf:tlsportname1/proxy/: tls baz (200; 54.225203ms) +Oct 27 14:33:22.910: INFO: (17) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test<... (200; 54.36648ms) +Oct 27 14:33:22.910: INFO: (17) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 54.349642ms) +Oct 27 14:33:22.910: INFO: (17) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:460/proxy/: tls baz (200; 54.326891ms) +Oct 27 14:33:22.926: INFO: (17) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname1/proxy/: foo (200; 70.52987ms) +Oct 27 14:33:22.926: INFO: (17) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname2/proxy/: bar (200; 70.619998ms) +Oct 27 14:33:22.930: INFO: (17) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 74.969639ms) +Oct 27 14:33:22.966: INFO: (17) /api/v1/namespaces/proxy-3176/services/proxy-service-h5nbf:portname1/proxy/: foo (200; 110.686205ms) +Oct 27 14:33:22.999: INFO: (18) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:1080/proxy/: ... (200; 32.446769ms) +Oct 27 14:33:22.999: INFO: (18) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... (200; 32.469533ms) +Oct 27 14:33:22.999: INFO: (18) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 32.668055ms) +Oct 27 14:33:22.999: INFO: (18) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf/proxy/: test (200; 32.573ms) +Oct 27 14:33:22.999: INFO: (18) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 32.535397ms) +Oct 27 14:33:22.999: INFO: (18) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: test (200; 32.118493ms) +Oct 27 14:33:23.080: INFO: (19) /api/v1/namespaces/proxy-3176/services/http:proxy-service-h5nbf:portname2/proxy/: bar (200; 36.94284ms) +Oct 27 14:33:23.080: INFO: (19) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:462/proxy/: tls qux (200; 36.981355ms) +Oct 27 14:33:23.080: INFO: (19) /api/v1/namespaces/proxy-3176/pods/https:proxy-service-h5nbf-d7ttf:443/proxy/: ... (200; 55.460167ms) +Oct 27 14:33:23.098: INFO: (19) /api/v1/namespaces/proxy-3176/pods/proxy-service-h5nbf-d7ttf:1080/proxy/: test<... 
(200; 55.365695ms) +Oct 27 14:33:23.134: INFO: (19) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:160/proxy/: foo (200; 91.701915ms) +Oct 27 14:33:23.134: INFO: (19) /api/v1/namespaces/proxy-3176/pods/http:proxy-service-h5nbf-d7ttf:162/proxy/: bar (200; 91.847888ms) +STEP: deleting ReplicationController proxy-service-h5nbf in namespace proxy-3176, will wait for the garbage collector to delete the pods +Oct 27 14:33:23.212: INFO: Deleting ReplicationController proxy-service-h5nbf took: 14.58295ms +Oct 27 14:33:23.413: INFO: Terminating ReplicationController proxy-service-h5nbf pods took: 200.64823ms +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:25.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-3176" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":346,"completed":90,"skipped":1426,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:25.447: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7388 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7388 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-7388 +STEP: creating replication controller externalsvc in namespace services-7388 +I1027 14:33:25.681124 5768 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7388, replica count: 2 +I1027 14:33:28.732590 5768 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Oct 27 14:33:28.776: INFO: Creating new exec pod +Oct 27 14:33:30.820: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7388 exec execpodfcbdj -- /bin/sh -x -c nslookup clusterip-service.services-7388.svc.cluster.local' +Oct 27 14:33:31.608: INFO: stderr: "+ nslookup clusterip-service.services-7388.svc.cluster.local\n" +Oct 27 14:33:31.608: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nclusterip-service.services-7388.svc.cluster.local\tcanonical 
name = externalsvc.services-7388.svc.cluster.local.\nName:\texternalsvc.services-7388.svc.cluster.local\nAddress: 100.66.1.218\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-7388, will wait for the garbage collector to delete the pods +Oct 27 14:33:31.682: INFO: Deleting ReplicationController externalsvc took: 12.798415ms +Oct 27 14:33:31.783: INFO: Terminating ReplicationController externalsvc pods took: 101.328115ms +Oct 27 14:33:34.505: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:34.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7388" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":91,"skipped":1455,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:34.553: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6158 +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Oct 27 14:33:38.782: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6158 PodName:pod-sharedvolume-23903f18-10c1-49cf-8407-d27c9c524e93 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:33:38.782: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:33:39.182: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:39.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6158" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":92,"skipped":1489,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:39.215: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-307 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... +Oct 27 14:33:39.425: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-307 b6770dbf-f262-4dd5-9b7c-8f965716bd63 19362 0 2021-10-27 14:33:39 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-27 14:33:39 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bwh8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bwh8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadCons
traints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:33:39.437: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:33:41.449: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Oct 27 14:33:41.450: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-307 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:33:41.450: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Verifying customized DNS server is configured on pod... +Oct 27 14:33:42.046: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-307 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:33:42.046: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:33:42.519: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:42.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-307" for this suite. +•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":93,"skipped":1515,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:42.579: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7304 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 14:33:42.795: INFO: The status of Pod labelsupdate03d5ee5e-e0d2-45e8-8c86-09e09a0d7dff is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:33:44.808: INFO: The status of Pod labelsupdate03d5ee5e-e0d2-45e8-8c86-09e09a0d7dff is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:33:46.809: INFO: The status of Pod labelsupdate03d5ee5e-e0d2-45e8-8c86-09e09a0d7dff is Running (Ready = true) +Oct 27 14:33:47.396: INFO: 
Successfully updated pod "labelsupdate03d5ee5e-e0d2-45e8-8c86-09e09a0d7dff" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:49.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7304" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":94,"skipped":1557,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:49.524: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4908 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:55.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4908" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":95,"skipped":1592,"failed":0} +SSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:55.195: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2998 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-b7cf4969-010b-41f6-ac06-9bbe0b8150bc in namespace container-probe-2998 +Oct 27 14:33:59.423: INFO: Started pod liveness-b7cf4969-010b-41f6-ac06-9bbe0b8150bc in namespace container-probe-2998 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:33:59.440: INFO: Initial restart count of pod liveness-b7cf4969-010b-41f6-ac06-9bbe0b8150bc is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:01.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2998" for this suite. +•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":96,"skipped":1596,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:01.102: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2258 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:14.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2258" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":346,"completed":97,"skipped":1626,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:14.487: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2621 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:38:14.693: INFO: Waiting up to 5m0s for pod "downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4" in namespace "downward-api-2621" to be "Succeeded or Failed" +Oct 27 14:38:14.704: INFO: Pod "downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.744753ms +Oct 27 14:38:16.716: INFO: Pod "downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022908543s +Oct 27 14:38:18.729: INFO: Pod "downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035699487s +STEP: Saw pod success +Oct 27 14:38:18.729: INFO: Pod "downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4" satisfied condition "Succeeded or Failed" +Oct 27 14:38:18.740: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4 container dapi-container: +STEP: delete the pod +Oct 27 14:38:18.828: INFO: Waiting for pod downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4 to disappear +Oct 27 14:38:18.842: INFO: Pod downward-api-ebd93aa3-c5a9-4b75-bca0-f0c622f4dbe4 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:18.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2621" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":98,"skipped":1633,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:18.877: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1099 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:38:19.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:38:21.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942299, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:38:24.466: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:38:24.478: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2340-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:27.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1099" for this suite. +STEP: Destroying namespace "webhook-1099-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":99,"skipped":1647,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:27.778: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6178 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:38:27.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b" in namespace "projected-6178" to be "Succeeded or Failed" +Oct 27 14:38:27.994: INFO: Pod "downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.040189ms +Oct 27 14:38:30.005: INFO: Pod "downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022531903s +Oct 27 14:38:32.019: INFO: Pod "downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036071728s +STEP: Saw pod success +Oct 27 14:38:32.019: INFO: Pod "downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b" satisfied condition "Succeeded or Failed" +Oct 27 14:38:32.031: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b container client-container: +STEP: delete the pod +Oct 27 14:38:32.102: INFO: Waiting for pod downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b to disappear +Oct 27 14:38:32.114: INFO: Pod downwardapi-volume-c6674008-4d96-4933-ae0c-eb045b5d687b no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:32.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6178" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":100,"skipped":1689,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:32.148: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-3645 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Oct 27 14:38:32.366: INFO: Pod name sample-pod: Found 1 pods out of 3 +Oct 27 14:38:37.378: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Oct 27 14:38:37.388: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:37.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3645" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":101,"skipped":1707,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:37.460: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-8626 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:38:37.720: INFO: Create a RollingUpdate DaemonSet +Oct 27 14:38:37.734: INFO: Check that daemon pods launch on every node of the cluster +Oct 27 14:38:37.761: INFO: Number of nodes with available pods: 0 +Oct 27 14:38:37.761: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:38:38.794: INFO: Number of nodes with available pods: 0 +Oct 27 14:38:38.794: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:38:39.795: INFO: Number of nodes with available pods: 0 +Oct 27 14:38:39.806: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:38:40.796: INFO: Number of nodes with available pods: 2 +Oct 27 14:38:40.796: INFO: Number of running nodes: 2, number of available pods: 2 +Oct 27 14:38:40.796: INFO: Update the DaemonSet to trigger a rollout +Oct 27 14:38:40.820: INFO: Updating DaemonSet daemon-set +Oct 27 14:38:43.884: INFO: Roll back the DaemonSet before rollout is complete +Oct 27 14:38:43.912: INFO: Updating DaemonSet daemon-set +Oct 27 14:38:43.912: INFO: Make sure DaemonSet rollback is complete +Oct 27 14:38:48.950: INFO: Pod daemon-set-4qgkx is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8626, will wait for the garbage collector to delete the pods +Oct 27 14:38:49.068: INFO: Deleting DaemonSet.extensions daemon-set took: 13.015015ms +Oct 27 14:38:49.169: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.684392ms +Oct 27 14:38:50.983: INFO: Number of nodes with available pods: 0 +Oct 27 14:38:50.983: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:38:50.995: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"21519"},"items":null} + +Oct 27 14:38:51.006: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"21519"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:51.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8626" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":102,"skipped":1717,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:51.079: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-850 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 14:38:51.297: INFO: The status of Pod annotationupdate1ebccf76-6b13-4dc9-aca4-a044969d3255 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:38:53.310: INFO: The status of Pod annotationupdate1ebccf76-6b13-4dc9-aca4-a044969d3255 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:38:55.310: INFO: The status of Pod annotationupdate1ebccf76-6b13-4dc9-aca4-a044969d3255 is Running (Ready = true) +Oct 27 14:38:55.908: INFO: Successfully updated pod "annotationupdate1ebccf76-6b13-4dc9-aca4-a044969d3255" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:58.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-850" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":103,"skipped":1729,"failed":0} +SS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:58.051: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-9867 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replication controller my-hostname-basic-af2a22ec-c089-40a0-8ac2-ee41abb03e70 +Oct 27 14:38:58.266: INFO: Pod name my-hostname-basic-af2a22ec-c089-40a0-8ac2-ee41abb03e70: Found 1 pods out of 1 +Oct 27 14:38:58.266: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-af2a22ec-c089-40a0-8ac2-ee41abb03e70" are running +Oct 27 14:39:02.289: INFO: Pod "my-hostname-basic-af2a22ec-c089-40a0-8ac2-ee41abb03e70-w4shh" is running (conditions: []) +Oct 27 14:39:02.289: INFO: Trying to dial the pod +Oct 27 14:39:07.429: INFO: Controller my-hostname-basic-af2a22ec-c089-40a0-8ac2-ee41abb03e70: Got expected result from replica 1 [my-hostname-basic-af2a22ec-c089-40a0-8ac2-ee41abb03e70-w4shh]: "my-hostname-basic-af2a22ec-c089-40a0-8ac2-ee41abb03e70-w4shh", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:07.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9867" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":104,"skipped":1731,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:07.465: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-871 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-4fbb443c-6098-45c6-a7b7-c014dac9edb2 +STEP: Creating a pod to test consume configMaps +Oct 27 14:39:07.688: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70" in namespace "projected-871" to be "Succeeded or Failed" +Oct 27 14:39:07.699: INFO: Pod "pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70": Phase="Pending", Reason="", readiness=false. Elapsed: 11.421048ms +Oct 27 14:39:09.712: INFO: Pod "pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70": Phase="Running", Reason="", readiness=true. Elapsed: 2.023990066s +Oct 27 14:39:11.723: INFO: Pod "pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035859627s +STEP: Saw pod success +Oct 27 14:39:11.723: INFO: Pod "pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70" satisfied condition "Succeeded or Failed" +Oct 27 14:39:11.735: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70 container projected-configmap-volume-test: +STEP: delete the pod +Oct 27 14:39:11.807: INFO: Waiting for pod pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70 to disappear +Oct 27 14:39:11.818: INFO: Pod pod-projected-configmaps-44284e46-29ea-4f9a-a919-fb868fb8dc70 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:11.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-871" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":105,"skipped":1734,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:11.852: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3058 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 14:39:12.063: INFO: Waiting up to 5m0s for pod "pod-be2b7045-4245-4370-8bb7-6214df178d3f" in namespace "emptydir-3058" to be "Succeeded or Failed" +Oct 27 14:39:12.074: INFO: Pod "pod-be2b7045-4245-4370-8bb7-6214df178d3f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.886623ms +Oct 27 14:39:14.087: INFO: Pod "pod-be2b7045-4245-4370-8bb7-6214df178d3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023851765s +Oct 27 14:39:16.100: INFO: Pod "pod-be2b7045-4245-4370-8bb7-6214df178d3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037422227s +STEP: Saw pod success +Oct 27 14:39:16.100: INFO: Pod "pod-be2b7045-4245-4370-8bb7-6214df178d3f" satisfied condition "Succeeded or Failed" +Oct 27 14:39:16.111: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-be2b7045-4245-4370-8bb7-6214df178d3f container test-container: +STEP: delete the pod +Oct 27 14:39:16.179: INFO: Waiting for pod pod-be2b7045-4245-4370-8bb7-6214df178d3f to disappear +Oct 27 14:39:16.190: INFO: Pod pod-be2b7045-4245-4370-8bb7-6214df178d3f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:16.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3058" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":106,"skipped":1791,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:16.224: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-693 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-977338c6-1d43-432e-89a8-fbe1266148f1 +STEP: Creating a pod to test consume secrets +Oct 27 14:39:16.440: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b" in namespace "projected-693" to be "Succeeded or Failed" +Oct 27 14:39:16.452: INFO: Pod "pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.677449ms +Oct 27 14:39:18.464: INFO: Pod "pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023939735s +Oct 27 14:39:20.477: INFO: Pod "pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036703866s +STEP: Saw pod success +Oct 27 14:39:20.477: INFO: Pod "pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b" satisfied condition "Succeeded or Failed" +Oct 27 14:39:20.488: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:39:20.559: INFO: Waiting for pod pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b to disappear +Oct 27 14:39:20.570: INFO: Pod pod-projected-secrets-e002e517-eb8a-4acb-924d-dbb32acef94b no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:20.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-693" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":107,"skipped":1811,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:20.604: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2484 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-235c4943-bca3-4aae-ba78-79a509a69733 +STEP: Creating a pod to test consume configMaps +Oct 27 14:39:20.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af" in namespace "configmap-2484" to be "Succeeded or Failed" +Oct 27 14:39:20.827: INFO: Pod "pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af": Phase="Pending", Reason="", readiness=false. Elapsed: 10.926492ms +Oct 27 14:39:22.839: INFO: Pod "pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022985996s +Oct 27 14:39:24.851: INFO: Pod "pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03467621s +STEP: Saw pod success +Oct 27 14:39:24.851: INFO: Pod "pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af" satisfied condition "Succeeded or Failed" +Oct 27 14:39:24.862: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af container agnhost-container: +STEP: delete the pod +Oct 27 14:39:24.932: INFO: Waiting for pod pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af to disappear +Oct 27 14:39:24.943: INFO: Pod pod-configmaps-ef704cbb-6094-46b3-83f7-42884a3f43af no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:24.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2484" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":108,"skipped":1840,"failed":0} +SSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:24.978: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-685 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:39:25.252: INFO: Number of nodes with available pods: 0 +Oct 27 14:39:25.252: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:39:26.286: INFO: Number of nodes with available pods: 0 +Oct 27 14:39:26.286: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:39:27.285: INFO: Number of nodes with available pods: 1 +Oct 27 14:39:27.285: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:39:28.286: INFO: Number of nodes with available pods: 2 +Oct 27 14:39:28.286: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +Oct 27 14:39:28.363: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"21880"},"items":null} + +Oct 27 14:39:28.379: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"21880"},"items":[{"metadata":{"name":"daemon-set-hsr6z","generateName":"daemon-set-","namespace":"daemonsets-685","uid":"40308053-f962-45fb-b92e-8caf862fe92a","resourceVersion":"21878","creationTimestamp":"2021-10-27T14:39:25Z","deletionTimestamp":"2021-10-27T14:39:58Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"62b79e68907a3d0766dad342b652f8f0fce959c572a9f4bd1094e76c14de6e9b","cni.projectcalico.org/podIP":"100.96.1.125/32","cni.projectcalico.org/podIPs":"100.96.1.125/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fcffa4d4-fc6b-4779-a9df-1976abfc424f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:39:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:39:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fcffa4d4-fc6b-4779-a9df-1976abfc424f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:39:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-h6ntk","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmgxs-skc.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-h6ntk","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/
serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:39:25Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:39:27Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:39:27Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:39:25Z"}],"hostIP":"10.250.0.4","podIP":"100.96.1.125","podIPs":[{"ip":"100.96.1.125"}],"startTime":"2021-10-27T14:39:25Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T14:39:26Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://7e9cc4097877b0b49bd7a772d82eddada703fbd48ae5f6f2be30b240600ad829","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-zqkwx","generateName":"daemon-set-","namespace":"daemonsets-685","uid":"790700a2-aea6-4295-88d5-41ec7eeb59e7","resourceVersion":"21877","creationTimestamp":"2021-10-27T14:39:25Z","deletionTimestamp":"2021-10-27T14:39:58Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"bab2d17ab15a551888f2ef53ee1a108ce3c7777755b8320a3a491cf90dc299f8","cni.projectcalico.org/podIP":"100.96.0.41/32","cni.projectcalico.org/podIPs":"100.96.0.41/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fcffa4d4-fc6b-4779-a9df-1976abfc424f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:39:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:39:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{"."
:{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fcffa4d4-fc6b-4779-a9df-1976abfc424f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T14:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-hb4sj","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmgxs-skc.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-hb4sj","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:39:25Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransi
tionTime":"2021-10-27T14:39:26Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:39:26Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T14:39:25Z"}],"hostIP":"10.250.0.5","podIP":"100.96.0.41","podIPs":[{"ip":"100.96.0.41"}],"startTime":"2021-10-27T14:39:25Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T14:39:26Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://19a621f41d3df66bc3acdf2001835af4b6b71feff5efe6b5eaa78b0aba3d4913","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:28.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-685" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":109,"skipped":1845,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:28.439: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7872 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-7872 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:39:28.627: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:39:28.705: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:39:30.717: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:39:32.718: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:34.717: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:36.718: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:38.718: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:40.717: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:42.718: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:44.717: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:46.718: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:39:48.718: INFO: The status of Pod netserver-0 is Running 
(Ready = false) +Oct 27 14:39:50.717: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:39:50.740: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:39:54.841: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:39:54.841: INFO: Going to poll 100.96.0.42 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:39:54.852: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.0.42 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7872 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:39:54.852: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:39:56.367: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 14:39:56.367: INFO: Going to poll 100.96.1.126 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:39:56.379: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7872 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:39:56.379: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:39:57.846: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:57.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7872" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":1866,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:57.881: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3899 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-c18cc9ac-8377-4e45-b9ff-e8e6917f61b1 +STEP: Creating a pod to test consume secrets +Oct 27 14:39:58.104: INFO: Waiting up to 5m0s for pod "pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0" in namespace "secrets-3899" to be "Succeeded or Failed" +Oct 27 14:39:58.115: INFO: Pod "pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.950916ms +Oct 27 14:40:00.128: INFO: Pod "pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023986412s +Oct 27 14:40:02.141: INFO: Pod "pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037172407s +STEP: Saw pod success +Oct 27 14:40:02.141: INFO: Pod "pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0" satisfied condition "Succeeded or Failed" +Oct 27 14:40:02.153: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0 container secret-env-test: +STEP: delete the pod +Oct 27 14:40:02.223: INFO: Waiting for pod pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0 to disappear +Oct 27 14:40:02.235: INFO: Pod pod-secrets-46bdc354-99ac-4f9a-a7b5-5f6448b030d0 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:02.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3899" for this suite. +•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":111,"skipped":1877,"failed":0} +SSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:02.269: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7930 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:40:02.478: INFO: Waiting up to 5m0s for pod "downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83" in namespace "downward-api-7930" to be "Succeeded or Failed" +Oct 27 14:40:02.492: INFO: Pod "downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83": Phase="Pending", Reason="", readiness=false. Elapsed: 14.142992ms +Oct 27 14:40:04.510: INFO: Pod "downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032799124s +Oct 27 14:40:06.523: INFO: Pod "downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045056889s +STEP: Saw pod success +Oct 27 14:40:06.523: INFO: Pod "downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83" satisfied condition "Succeeded or Failed" +Oct 27 14:40:06.534: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83 container dapi-container: +STEP: delete the pod +Oct 27 14:40:06.619: INFO: Waiting for pod downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83 to disappear +Oct 27 14:40:06.631: INFO: Pod downward-api-e87f67c1-e65b-495a-9594-fcb6c497be83 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:06.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7930" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":112,"skipped":1883,"failed":0} + +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:06.664: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9468 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-2d3d134b-0452-44cd-9947-9ddc7173cf0a +STEP: Creating a pod to test consume secrets +Oct 27 14:40:06.874: INFO: Waiting up to 5m0s for pod "pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f" in namespace "secrets-9468" to be "Succeeded or Failed" +Oct 27 14:40:06.885: INFO: Pod "pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.172162ms +Oct 27 14:40:08.897: INFO: Pod "pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f": Phase="Running", Reason="", readiness=true. Elapsed: 2.022866134s +Oct 27 14:40:10.910: INFO: Pod "pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035852075s +STEP: Saw pod success +Oct 27 14:40:10.910: INFO: Pod "pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f" satisfied condition "Succeeded or Failed" +Oct 27 14:40:10.921: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f container secret-volume-test: +STEP: delete the pod +Oct 27 14:40:11.001: INFO: Waiting for pod pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f to disappear +Oct 27 14:40:11.014: INFO: Pod pod-secrets-9a7baee2-d902-4b42-98ae-e5c099d9019f no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:11.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9468" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":113,"skipped":1883,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:11.048: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8215 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-e8c699d4-c136-4f98-bb8c-18bb74003aab +STEP: Creating configMap with name cm-test-opt-upd-70f8c084-f7f6-4533-aa0c-ca7fdd68bdb6 +STEP: Creating the pod +Oct 27 14:40:11.307: INFO: The status of Pod pod-projected-configmaps-2d300b25-64b2-4b01-b700-f9b037f6fd0c is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:40:13.319: INFO: The status of Pod pod-projected-configmaps-2d300b25-64b2-4b01-b700-f9b037f6fd0c is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:40:15.321: INFO: The status of Pod pod-projected-configmaps-2d300b25-64b2-4b01-b700-f9b037f6fd0c is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-e8c699d4-c136-4f98-bb8c-18bb74003aab +STEP: Updating configmap cm-test-opt-upd-70f8c084-f7f6-4533-aa0c-ca7fdd68bdb6 +STEP: Creating configMap with name cm-test-opt-create-17bfc441-f39d-4c03-9862-54d5a8c65ddc +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:17.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8215" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":114,"skipped":1911,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:17.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-2322 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3734 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-1816 +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:24.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-2322" for this suite. +STEP: Destroying namespace "nsdeletetest-3734" for this suite. +Oct 27 14:40:24.511: INFO: Namespace nsdeletetest-3734 was already deleted +STEP: Destroying namespace "nsdeletetest-1816" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":115,"skipped":1924,"failed":0} + +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:24.524: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-8322 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pod templates +Oct 27 14:40:24.722: INFO: created test-podtemplate-1 +Oct 27 14:40:24.734: INFO: created test-podtemplate-2 +Oct 27 14:40:24.746: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Oct 27 14:40:24.757: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Oct 27 14:40:24.779: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:24.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-8322" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":116,"skipped":1924,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:24.816: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9831 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-4801b27e-4579-404a-808c-94930d58b072 +STEP: Creating a pod to test consume secrets +Oct 27 14:40:25.037: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce" in namespace "projected-9831" to be "Succeeded or Failed" +Oct 27 14:40:25.048: INFO: Pod "pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 11.010126ms +Oct 27 14:40:27.061: INFO: Pod "pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024672879s +Oct 27 14:40:29.074: INFO: Pod "pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037365795s +STEP: Saw pod success +Oct 27 14:40:29.074: INFO: Pod "pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce" satisfied condition "Succeeded or Failed" +Oct 27 14:40:29.085: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:40:29.159: INFO: Waiting for pod pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce to disappear +Oct 27 14:40:29.170: INFO: Pod pod-projected-secrets-b151b077-f7a9-495c-9f36-ecfdc705e0ce no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:29.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9831" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":117,"skipped":1950,"failed":0} +SS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:29.204: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4463 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's args +Oct 27 14:40:29.410: INFO: Waiting up to 5m0s for pod "var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24" in namespace "var-expansion-4463" to be "Succeeded or Failed" +Oct 27 14:40:29.424: INFO: Pod "var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24": Phase="Pending", Reason="", readiness=false. Elapsed: 14.703959ms +Oct 27 14:40:31.438: INFO: Pod "var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028161969s +Oct 27 14:40:33.454: INFO: Pod "var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044690378s +STEP: Saw pod success +Oct 27 14:40:33.454: INFO: Pod "var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24" satisfied condition "Succeeded or Failed" +Oct 27 14:40:33.467: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24 container dapi-container: +STEP: delete the pod +Oct 27 14:40:33.577: INFO: Waiting for pod var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24 to disappear +Oct 27 14:40:33.588: INFO: Pod var-expansion-d85001f0-a311-4b14-832b-752e84ad9d24 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:33.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4463" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":118,"skipped":1952,"failed":0} +SS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:33.622: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-2931 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 14:40:33.829: INFO: Waiting up to 5m0s for pod "security-context-dbaed928-cf89-4637-b7be-1b3b6b206263" in namespace "security-context-2931" to be "Succeeded or Failed" +Oct 27 14:40:33.840: INFO: Pod "security-context-dbaed928-cf89-4637-b7be-1b3b6b206263": Phase="Pending", Reason="", readiness=false. Elapsed: 11.072513ms +Oct 27 14:40:35.853: INFO: Pod "security-context-dbaed928-cf89-4637-b7be-1b3b6b206263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023774278s +Oct 27 14:40:37.866: INFO: Pod "security-context-dbaed928-cf89-4637-b7be-1b3b6b206263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036571441s +STEP: Saw pod success +Oct 27 14:40:37.866: INFO: Pod "security-context-dbaed928-cf89-4637-b7be-1b3b6b206263" satisfied condition "Succeeded or Failed" +Oct 27 14:40:37.879: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod security-context-dbaed928-cf89-4637-b7be-1b3b6b206263 container test-container: +STEP: delete the pod +Oct 27 14:40:37.993: INFO: Waiting for pod security-context-dbaed928-cf89-4637-b7be-1b3b6b206263 to disappear +Oct 27 14:40:38.004: INFO: Pod security-context-dbaed928-cf89-4637-b7be-1b3b6b206263 no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:38.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-2931" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":119,"skipped":1954,"failed":0} + +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:38.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-2302 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +Oct 27 14:40:38.251: INFO: Creating simple deployment test-deployment-2zghh +Oct 27 14:40:38.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-2zghh-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:40:40.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-2zghh-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Getting /status +Oct 27 14:40:42.340: INFO: Deployment test-deployment-2zghh has Conditions: [{Available True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:40 +0000 UTC MinimumReplicasAvailable Deployment has 
minimum availability.} {Progressing True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2zghh-794dd694d8" has successfully progressed.}] +STEP: updating Deployment Status +Oct 27 14:40:42.364: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942440, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942440, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942440, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942438, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-2zghh-794dd694d8\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Oct 27 14:40:42.375: INFO: Observed &Deployment event: ADDED +Oct 27 14:40:42.375: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2zghh-794dd694d8"} +Oct 27 14:40:42.375: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.375: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2zghh-794dd694d8"} +Oct 27 14:40:42.375: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:40:42.375: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.375: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:40:42.375: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-2zghh-794dd694d8" is progressing.} +Oct 27 14:40:42.375: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.383: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:40 +0000 UTC MinimumReplicasAvailable Deployment 
has minimum availability.} +Oct 27 14:40:42.383: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2zghh-794dd694d8" has successfully progressed.} +Oct 27 14:40:42.384: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.384: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:40 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:40:42.384: INFO: Observed Deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2zghh-794dd694d8" has successfully progressed.} +Oct 27 14:40:42.384: INFO: Found Deployment test-deployment-2zghh in namespace deployment-2302 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:40:42.384: INFO: Deployment test-deployment-2zghh has an updated status +STEP: patching the Statefulset Status +Oct 27 14:40:42.384: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:40:42.397: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Oct 27 14:40:42.407: INFO: Observed &Deployment event: ADDED +Oct 27 14:40:42.407: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2zghh-794dd694d8"} +Oct 27 14:40:42.407: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.407: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2zghh-794dd694d8"} +Oct 27 14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:40:42.408: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 
14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:38 +0000 UTC 2021-10-27 14:40:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-2zghh-794dd694d8" is progressing.} +Oct 27 14:40:42.408: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:40 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2zghh-794dd694d8" has successfully progressed.} +Oct 27 14:40:42.408: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:40 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:40:40 +0000 UTC 2021-10-27 14:40:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2zghh-794dd694d8" has successfully progressed.} +Oct 27 14:40:42.408: INFO: Observed deployment test-deployment-2zghh in namespace deployment-2302 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:40:42.409: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:40:42.409: INFO: Found deployment test-deployment-2zghh in namespace deployment-2302 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Oct 27 14:40:42.409: INFO: Deployment test-deployment-2zghh has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:40:42.420: INFO: Deployment "test-deployment-2zghh": +&Deployment{ObjectMeta:{test-deployment-2zghh deployment-2302 acc6ea33-f098-475a-b30a-55b3504a25a1 22602 1 2021-10-27 14:40:38 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 14:40:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2021-10-27 14:40:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2021-10-27 14:40:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003728238 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:40:42 +0000 UTC,LastTransitionTime:2021-10-27 14:40:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-2zghh-794dd694d8" has successfully progressed.,LastUpdateTime:2021-10-27 14:40:42 +0000 UTC,LastTransitionTime:2021-10-27 14:40:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:40:42.432: INFO: New ReplicaSet 
"test-deployment-2zghh-794dd694d8" of Deployment "test-deployment-2zghh": +&ReplicaSet{ObjectMeta:{test-deployment-2zghh-794dd694d8 deployment-2302 08981577-9bf4-4d81-9d82-3670269a5d13 22593 1 2021-10-27 14:40:38 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-2zghh acc6ea33-f098-475a-b30a-55b3504a25a1 0xc003728670 0xc003728671}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"acc6ea33-f098-475a-b30a-55b3504a25a1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:40:40 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 794dd694d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003728718 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:40:42.444: INFO: Pod "test-deployment-2zghh-794dd694d8-gx6qj" is available: +&Pod{ObjectMeta:{test-deployment-2zghh-794dd694d8-gx6qj test-deployment-2zghh-794dd694d8- deployment-2302 d2b556dc-f378-4800-9553-19ac46e033c8 22592 0 2021-10-27 14:40:38 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[cni.projectcalico.org/containerID:bc17dea823d8d307cf216831743aa3c90e30e3e54403d5931933cc407f42d80c cni.projectcalico.org/podIP:100.96.1.135/32 cni.projectcalico.org/podIPs:100.96.1.135/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-2zghh-794dd694d8 08981577-9bf4-4d81-9d82-3670269a5d13 0xc003728c20 0xc003728c21}] [] [{kube-controller-manager Update v1 2021-10-27 14:40:38 +0000 
UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08981577-9bf4-4d81-9d82-3670269a5d13\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:40:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:40:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l76q2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l76q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:fa
lse,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.135,StartTime:2021-10-27 14:40:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:40:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://2a3e8412bbb20bc04a25941189b179a499aa5d2597ac1e702330c9f77b4e2c44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:42.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2302" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":120,"skipped":1954,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:42.470: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3412 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:40:42.740: INFO: Number of nodes with available pods: 0 +Oct 27 14:40:42.740: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:40:43.779: INFO: Number of nodes with available pods: 0 +Oct 27 14:40:43.779: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:40:44.774: INFO: Number of nodes with available pods: 1 +Oct 27 14:40:44.774: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:40:45.775: INFO: Number of nodes with available pods: 2 +Oct 27 14:40:45.775: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Oct 27 14:40:45.843: INFO: Number of nodes with available pods: 1 +Oct 27 14:40:45.843: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:40:46.877: INFO: Number of nodes with available pods: 1 +Oct 27 14:40:46.877: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:40:47.876: INFO: Number of nodes with available pods: 1 +Oct 27 14:40:47.877: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:40:48.878: INFO: Number of nodes with available pods: 1 +Oct 27 14:40:48.878: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:40:49.878: INFO: Number of nodes with available pods: 2 +Oct 27 14:40:49.878: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3412, will wait for the garbage collector to delete the pods +Oct 27 14:40:49.964: INFO: Deleting DaemonSet.extensions daemon-set took: 13.109113ms +Oct 27 14:40:50.065: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.566071ms +Oct 27 14:40:52.376: INFO: Number of nodes with available pods: 0 +Oct 27 14:40:52.376: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:40:52.387: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"22744"},"items":null} + +Oct 27 14:40:52.398: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"22744"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:52.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3412" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":121,"skipped":1969,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:52.469: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-6993 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +Oct 27 14:40:52.671: INFO: created test-event-1 +Oct 27 14:40:52.683: INFO: created test-event-2 +Oct 27 14:40:52.694: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Oct 27 14:40:52.708: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Oct 27 14:40:52.734: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:52.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-6993" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":122,"skipped":1986,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:52.776: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4115 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-4115 +STEP: creating service affinity-clusterip-transition in namespace services-4115 +STEP: creating replication controller affinity-clusterip-transition in namespace services-4115 +I1027 14:40:52.994479 5768 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4115, replica count: 3 +I1027 14:40:56.046179 5768 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:40:56.068: INFO: Creating new exec pod +Oct 27 14:40:59.108: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4115 exec execpod-affinity67gv5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Oct 27 14:40:59.619: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Oct 27 14:40:59.619: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:40:59.619: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4115 exec execpod-affinity67gv5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.149.249 80' +Oct 27 14:41:00.154: INFO: stderr: "+ nc -v -t -w 2 100.64.149.249 80\n+ echo hostName\nConnection to 100.64.149.249 80 port [tcp/http] succeeded!\n" +Oct 27 14:41:00.154: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:41:00.180: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4115 exec execpod-affinity67gv5 -- 
/bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.149.249:80/ ; done' +Oct 27 14:41:00.809: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n" +Oct 27 14:41:00.809: INFO: stdout: "\naffinity-clusterip-transition-4rk96\naffinity-clusterip-transition-2pg9t\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-2pg9t\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-4rk96\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-4rk96\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-4rk96\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-2pg9t\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-4rk96" +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-4rk96 +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-2pg9t +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-2pg9t +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-4rk96 +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-4rk96 +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-4rk96 +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-2pg9t +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:00.809: INFO: Received response from host: affinity-clusterip-transition-4rk96 +Oct 27 14:41:00.834: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4115 exec execpod-affinity67gv5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.149.249:80/ ; done' +Oct 27 14:41:01.433: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n" +Oct 27 14:41:01.433: INFO: stdout: "\naffinity-clusterip-transition-2pg9t\naffinity-clusterip-transition-4rk96\naffinity-clusterip-transition-4rk96\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq" +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-2pg9t +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-4rk96 +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-4rk96 +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:01.433: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:31.435: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4115 exec execpod-affinity67gv5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.149.249:80/ ; done' +Oct 27 14:41:32.054: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.149.249:80/\n" +Oct 27 14:41:32.055: INFO: stdout: "\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq\naffinity-clusterip-transition-rf6wq" +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Received response from 
host: affinity-clusterip-transition-rf6wq +Oct 27 14:41:32.055: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4115, will wait for the garbage collector to delete the pods +Oct 27 14:41:32.158: INFO: Deleting ReplicationController affinity-clusterip-transition took: 13.463565ms +Oct 27 14:41:32.259: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.959015ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:35.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4115" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":123,"skipped":2036,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:35.316: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9510 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-5643a5ba-4516-4c4f-a936-4212055dd39d +STEP: Creating the pod +Oct 27 14:41:35.582: INFO: The status of Pod pod-configmaps-3d170dba-7a96-4f85-a219-151f4c4fd036 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:41:37.595: INFO: The status of Pod pod-configmaps-3d170dba-7a96-4f85-a219-151f4c4fd036 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:41:39.594: INFO: The status of Pod pod-configmaps-3d170dba-7a96-4f85-a219-151f4c4fd036 is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-5643a5ba-4516-4c4f-a936-4212055dd39d +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:42:57.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9510" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":124,"skipped":2056,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:42:57.449: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7986 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:42:57.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186" in namespace "downward-api-7986" to be "Succeeded or Failed" +Oct 27 14:42:57.671: INFO: Pod "downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186": Phase="Pending", Reason="", readiness=false. Elapsed: 11.54211ms +Oct 27 14:42:59.683: INFO: Pod "downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186": Phase="Running", Reason="", readiness=true. Elapsed: 2.024395538s +Oct 27 14:43:01.696: INFO: Pod "downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036630445s +STEP: Saw pod success +Oct 27 14:43:01.696: INFO: Pod "downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186" satisfied condition "Succeeded or Failed" +Oct 27 14:43:01.707: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186 container client-container: +STEP: delete the pod +Oct 27 14:43:01.773: INFO: Waiting for pod downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186 to disappear +Oct 27 14:43:01.784: INFO: Pod downwardapi-volume-9fdce5ea-61e6-4546-a058-d2ac79a84186 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:01.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7986" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":125,"skipped":2086,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:01.818: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-605 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:43:02.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924" in namespace "downward-api-605" to be "Succeeded or Failed" +Oct 27 14:43:02.035: INFO: Pod "downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924": Phase="Pending", Reason="", readiness=false. Elapsed: 10.982323ms +Oct 27 14:43:04.047: INFO: Pod "downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022931479s +Oct 27 14:43:06.060: INFO: Pod "downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036236333s +STEP: Saw pod success +Oct 27 14:43:06.060: INFO: Pod "downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924" satisfied condition "Succeeded or Failed" +Oct 27 14:43:06.072: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924 container client-container: +STEP: delete the pod +Oct 27 14:43:06.136: INFO: Waiting for pod downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924 to disappear +Oct 27 14:43:06.147: INFO: Pod downwardapi-volume-a1fe1ea9-7490-4385-875f-cb56b71e2924 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:06.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-605" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":126,"skipped":2099,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:06.183: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7614 +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-fb0c6cbd-6626-474f-99ef-3bfce1caf4e9 +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:10.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7614" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":127,"skipped":2107,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:10.603: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-681 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:43:11.280: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 14:43:13.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942591, 
loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942591, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942591, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942591, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:43:16.359: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:43:16.372: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:20.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-681" for this suite. +STEP: Destroying namespace "webhook-681-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":128,"skipped":2116,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:20.242: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-9488 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Oct 27 14:43:20.458: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:20.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9488" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":129,"skipped":2161,"failed":0} +SSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:20.519: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9806 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9806.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9806.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9806.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9806.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9806.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9806.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:43:25.159: INFO: DNS probes using dns-9806/dns-test-c3bde326-a1a7-48d0-b40f-2767f1f418cd succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:25.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9806" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":130,"skipped":2165,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:25.229: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-1793 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:43:25.453: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:44:25.566: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:44:25.578: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-6684 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:44:25.808: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Oct 27 14:44:25.820: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:44:25.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-6684" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:44:25.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-1793" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":131,"skipped":2183,"failed":0} +S +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:44:26.017: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename server-version +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in server-version-3935 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Request ServerVersion +STEP: Confirm major version +Oct 27 14:44:26.211: INFO: Major version: 1 +STEP: Confirm minor version +Oct 27 14:44:26.212: INFO: cleanMinorVersion: 22 +Oct 27 14:44:26.212: INFO: Minor version: 22 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:44:26.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-3935" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":132,"skipped":2184,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:44:26.238: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4613 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Oct 27 14:44:26.464: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24267 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:44:26.464: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24267 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Oct 27 14:44:36.488: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24338 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:44:36.489: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24338 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Oct 27 14:44:46.514: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a 
watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24415 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:44:46.515: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24415 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Oct 27 14:44:56.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24466 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:44:56.535: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4613 1eb2a6f6-60c5-4b61-a44d-042d372a6097 24466 0 2021-10-27 14:44:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 14:44:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Oct 27 14:45:06.551: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4613 5057e2a9-fd4b-4960-8a1a-1d5c63538aa9 24517 0 2021-10-27 14:45:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:45:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:45:06.553: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4613 5057e2a9-fd4b-4960-8a1a-1d5c63538aa9 24517 0 2021-10-27 14:45:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:45:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Oct 27 14:45:16.574: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4613 5057e2a9-fd4b-4960-8a1a-1d5c63538aa9 24569 0 2021-10-27 14:45:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:45:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:45:16.574: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4613 
5057e2a9-fd4b-4960-8a1a-1d5c63538aa9 24569 0 2021-10-27 14:45:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 14:45:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:26.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4613" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":133,"skipped":2222,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:26.610: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-4570 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +STEP: reading a file in the container +Oct 27 14:45:31.386: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-4570 pod-service-account-8bb54d52-d4cd-437b-b799-ab9bf8591183 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Oct 27 14:45:32.151: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-4570 pod-service-account-8bb54d52-d4cd-437b-b799-ab9bf8591183 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Oct 27 14:45:32.675: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-4570 pod-service-account-8bb54d52-d4cd-437b-b799-ab9bf8591183 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:33.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4570" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":134,"skipped":2237,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:33.192: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3329 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-3329 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-3329 +I1027 14:45:33.479269 5768 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3329, replica count: 2 +I1027 14:45:36.531098 5768 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:45:36.531: INFO: Creating new exec pod +Oct 27 14:45:41.593: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3329 exec execpod97x9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:45:42.141: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:45:42.141: INFO: stdout: "" +Oct 27 14:45:43.142: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3329 exec execpod97x9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:45:43.639: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:45:43.639: INFO: stdout: "" +Oct 27 14:45:44.143: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3329 exec execpod97x9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:45:44.605: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:45:44.606: INFO: stdout: "externalname-service-fvqqd" +Oct 27 14:45:44.606: INFO: 
Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3329 exec execpod97x9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.207.17 80' +Oct 27 14:45:45.082: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.207.17 80\nConnection to 100.70.207.17 80 port [tcp/http] succeeded!\n" +Oct 27 14:45:45.082: INFO: stdout: "externalname-service-fvqqd" +Oct 27 14:45:45.082: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3329 exec execpod97x9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.5 32270' +Oct 27 14:45:45.579: INFO: stderr: "+ nc -v -t -w 2 10.250.0.5 32270\n+ echo hostName\nConnection to 10.250.0.5 32270 port [tcp/*] succeeded!\n" +Oct 27 14:45:45.579: INFO: stdout: "externalname-service-6snbl" +Oct 27 14:45:45.579: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3329 exec execpod97x9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.4 32270' +Oct 27 14:45:46.097: INFO: stderr: "+ nc -v -t -w 2 10.250.0.4 32270\n+ echo hostName\nConnection to 10.250.0.4 32270 port [tcp/*] succeeded!\n" +Oct 27 14:45:46.097: INFO: stdout: "externalname-service-6snbl" +Oct 27 14:45:46.097: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:46.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3329" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":135,"skipped":2269,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:46.158: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-3190 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:45:46.969: INFO: Pod name wrapped-volume-race-9c5ced4d-4d21-42f0-842a-7228817445ba: Found 0 pods out of 5 +Oct 27 14:45:52.017: INFO: Pod name wrapped-volume-race-9c5ced4d-4d21-42f0-842a-7228817445ba: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-9c5ced4d-4d21-42f0-842a-7228817445ba in namespace emptydir-wrapper-3190, will wait for the garbage collector to delete the pods +Oct 27 14:45:54.348: INFO: Deleting ReplicationController wrapped-volume-race-9c5ced4d-4d21-42f0-842a-7228817445ba took: 13.290159ms +Oct 27 14:45:54.449: INFO: Terminating ReplicationController wrapped-volume-race-9c5ced4d-4d21-42f0-842a-7228817445ba pods took: 100.672133ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:46:00.293: INFO: Pod name wrapped-volume-race-c039fdd6-866b-4092-ac92-c835dfa618b2: Found 0 pods out of 5 +Oct 27 14:46:05.340: INFO: Pod name wrapped-volume-race-c039fdd6-866b-4092-ac92-c835dfa618b2: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-c039fdd6-866b-4092-ac92-c835dfa618b2 in namespace emptydir-wrapper-3190, will wait for the garbage collector to delete the pods +Oct 27 14:46:07.514: INFO: Deleting ReplicationController wrapped-volume-race-c039fdd6-866b-4092-ac92-c835dfa618b2 took: 13.385947ms +Oct 27 14:46:07.614: INFO: Terminating ReplicationController wrapped-volume-race-c039fdd6-866b-4092-ac92-c835dfa618b2 pods took: 100.157334ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:46:12.455: INFO: Pod name wrapped-volume-race-3babecf5-18c0-4d3d-86a8-8add16778e63: Found 0 pods out of 5 +Oct 27 14:46:17.501: INFO: Pod name wrapped-volume-race-3babecf5-18c0-4d3d-86a8-8add16778e63: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-3babecf5-18c0-4d3d-86a8-8add16778e63 in namespace emptydir-wrapper-3190, will wait for the garbage collector to delete the pods +Oct 27 14:46:19.667: INFO: Deleting ReplicationController 
wrapped-volume-race-3babecf5-18c0-4d3d-86a8-8add16778e63 took: 13.605279ms +Oct 27 14:46:19.768: INFO: Terminating ReplicationController wrapped-volume-race-3babecf5-18c0-4d3d-86a8-8add16778e63 pods took: 101.141688ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:25.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-3190" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":136,"skipped":2279,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:25.310: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-689 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:46:25.523: INFO: The status of Pod pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:46:27.536: INFO: The status of Pod pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:46:29.536: INFO: The status of Pod pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 14:46:30.087: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c" +Oct 27 14:46:30.087: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c" in namespace "pods-689" to be "terminated due to deadline exceeded" +Oct 27 14:46:30.099: INFO: Pod "pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c": Phase="Running", Reason="", readiness=true. Elapsed: 11.720251ms +Oct 27 14:46:32.112: INFO: Pod "pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c": Phase="Failed", Reason="DeadlineExceeded", readiness=true. 
Elapsed: 2.025348661s +Oct 27 14:46:32.113: INFO: Pod "pod-update-activedeadlineseconds-7343e550-99af-4fa8-961e-6084a30ce69c" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:32.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-689" for this suite. +•{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":137,"skipped":2331,"failed":0} +SSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:32.148: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-1346 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:46:32.336: INFO: Creating deployment "webserver-deployment" +Oct 27 14:46:32.354: INFO: Waiting for observed generation 1 +Oct 27 14:46:34.378: INFO: Waiting for all required pods to come up +Oct 27 14:46:34.400: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Oct 27 14:46:38.437: INFO: Waiting for deployment "webserver-deployment" to complete +Oct 27 14:46:38.460: INFO: Updating deployment "webserver-deployment" with a non-existent image +Oct 27 14:46:38.484: INFO: Updating deployment webserver-deployment +Oct 27 14:46:38.484: INFO: Waiting for observed generation 2 +Oct 27 14:46:40.510: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Oct 27 14:46:40.522: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Oct 27 14:46:40.534: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 14:46:40.567: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Oct 27 14:46:40.567: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Oct 27 14:46:40.578: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 14:46:40.601: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Oct 27 14:46:40.601: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Oct 27 14:46:40.625: INFO: Updating deployment webserver-deployment +Oct 27 14:46:40.625: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Oct 27 14:46:40.650: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Oct 27 
14:46:42.726: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:46:42.749: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-1346 054ccdc6-f95f-4e19-8700-c7d4a2aefe2b 25604 3 2021-10-27 14:46:32 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0062e9358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 14:46:40 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-27 14:46:40 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 
UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Oct 27 14:46:42.761: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-1346 6de2aee6-4d48-41c1-9606-97ebcd64d344 25601 3 2021-10-27 14:46:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 054ccdc6-f95f-4e19-8700-c7d4a2aefe2b 0xc0062e9767 0xc0062e9768}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"054ccdc6-f95f-4e19-8700-c7d4a2aefe2b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0062e9808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:46:42.761: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Oct 27 14:46:42.761: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-1346 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 25599 3 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 054ccdc6-f95f-4e19-8700-c7d4a2aefe2b 0xc0062e9867 0xc0062e9868}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"054ccdc6-f95f-4e19-8700-c7d4a2aefe2b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:46:36 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0062e98f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:46:42.804: INFO: Pod "webserver-deployment-795d758f88-2pkd5" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-2pkd5 webserver-deployment-795d758f88- deployment-1346 b0993956-b4fa-4977-9bf5-55b3f1a7625c 25542 0 2021-10-27 14:46:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:77270810277fecf3fcffd8c152abe9f9cb0722e00d4d748a08e0c9fc028bbde8 cni.projectcalico.org/podIP:100.96.1.163/32 cni.projectcalico.org/podIPs:100.96.1.163/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc00524f4a7 0xc00524f4a8}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rqsr7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqsr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,A
ffinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.805: INFO: Pod "webserver-deployment-795d758f88-2vs96" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-2vs96 webserver-deployment-795d758f88- deployment-1346 4c31ec59-d813-47e0-8f87-f326f4c28ce5 25649 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:f39d0ca4a0e288df281940c583fb231ffdb8d927a41a38e64f41bf58c206abb8 cni.projectcalico.org/podIP:100.96.0.72/32 cni.projectcalico.org/podIPs:100.96.0.72/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc00524f7c0 0xc00524f7c1}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mcb5c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcb5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,A
ffinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.805: INFO: Pod "webserver-deployment-795d758f88-4hhct" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-4hhct webserver-deployment-795d758f88- deployment-1346 95100352-c6c6-4c31-97dd-81f19f733c1c 25646 0 2021-10-27 14:46:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:96924f070e7ffaa2cb7c85c6217cce7a7ff3fa09c0268e6a7ff7524e6f89bfb8 cni.projectcalico.org/podIP:100.96.1.161/32 cni.projectcalico.org/podIPs:100.96.1.161/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc00524fa70 0xc00524fa71}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.161\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vvvkr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvvkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,Se
ccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.161,StartTime:2021-10-27 14:46:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.805: INFO: Pod "webserver-deployment-795d758f88-4m988" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-4m988 webserver-deployment-795d758f88- deployment-1346 3918fc5c-775d-4cc7-87c6-c532194be553 25541 0 2021-10-27 14:46:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:4373d7e823052e46c0e53cc2302dcef04af647a1811863ddb197621321e537d9 cni.projectcalico.org/podIP:100.96.1.162/32 cni.projectcalico.org/podIPs:100.96.1.162/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc00524fd40 0xc00524fd41}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6w6hw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6w6hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.805: INFO: Pod "webserver-deployment-795d758f88-585cr" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-585cr webserver-deployment-795d758f88- deployment-1346 e0377e7c-7d74-4003-a554-4f2e6ec9ae37 25632 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:1a09e6f55541f6b75a18e8542946db3f71a2f8a89c0b3ce26ea4933f9284cf58 cni.projectcalico.org/podIP:100.96.0.65/32 cni.projectcalico.org/podIPs:100.96.0.65/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6a050 0xc003b6a051}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xhrzp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhrzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.805: INFO: Pod "webserver-deployment-795d758f88-krgmc" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-krgmc webserver-deployment-795d758f88- deployment-1346 400d5392-0ddf-4b8f-8ced-6387f6291426 25642 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:002ba9e48227a614a9fbcdb71073e99eb41ee39460df99ce0ab83ec6368644da cni.projectcalico.org/podIP:100.96.1.172/32 cni.projectcalico.org/podIPs:100.96.1.172/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6a280 0xc003b6a281}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-drh96,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drh96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.806: INFO: Pod "webserver-deployment-795d758f88-lc4gk" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-lc4gk webserver-deployment-795d758f88- deployment-1346 07304f38-3c73-4fed-9990-7ebd4380b8cd 25640 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:240df71c341d2dc776c8576cb706bff4062d720c059ea7caa2c0b3cff1aaa920 cni.projectcalico.org/podIP:100.96.1.170/32 cni.projectcalico.org/podIPs:100.96.1.170/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6a490 0xc003b6a491}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6snbw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6snbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.806: INFO: Pod "webserver-deployment-795d758f88-mffm7" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-mffm7 webserver-deployment-795d758f88- deployment-1346 d4c5c87e-27f4-433c-99a4-7225a4634907 25627 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:68b292344f86ab649fe925a52b4237694ca0bc6580e81d0befe24c3b87111d39 cni.projectcalico.org/podIP:100.96.0.63/32 cni.projectcalico.org/podIPs:100.96.0.63/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6a6a0 0xc003b6a6a1}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sj226,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sj226,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.806: INFO: Pod "webserver-deployment-795d758f88-n4z9t" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-n4z9t webserver-deployment-795d758f88- deployment-1346 398575b4-538c-4d48-aded-791a80bccdb2 25547 0 2021-10-27 14:46:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:9d45ef0a1deb70d83f8132aa74f4ba21822d667e77fc2f9fe15c308bf3b27d32 cni.projectcalico.org/podIP:100.96.0.62/32 cni.projectcalico.org/podIPs:100.96.0.62/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6a8b0 0xc003b6a8b1}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mlx6r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlx6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.806: INFO: Pod "webserver-deployment-795d758f88-rqrbv" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-rqrbv webserver-deployment-795d758f88- deployment-1346 5cbfd128-1aef-419b-aecb-6221a547deab 25630 0 2021-10-27 14:46:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:e68b8b9bcd5572376409a30918decc0efc37cff21f8232176804899faff120f8 cni.projectcalico.org/podIP:100.96.0.61/32 cni.projectcalico.org/podIPs:100.96.0.61/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6aac0 0xc003b6aac1}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w7v28,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7v28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvF
rom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:100.96.0.61,StartTime:2021-10-27 14:46:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.806: INFO: Pod "webserver-deployment-795d758f88-rzrh6" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-rzrh6 webserver-deployment-795d758f88- deployment-1346 8ba3e66b-6ba8-47b5-9b4d-73a102efd419 25650 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:2a1ac23d10adbb61ea3370a69dae5faa071fb4291bdb6339c297c39ec7907124 cni.projectcalico.org/podIP:100.96.0.73/32 
cni.projectcalico.org/podIPs:100.96.0.73/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6ad00 0xc003b6ad01}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fmkzw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fmkzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.806: INFO: Pod "webserver-deployment-795d758f88-v7bwt" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-v7bwt webserver-deployment-795d758f88- deployment-1346 5d74552d-4819-4052-bd52-8e8234cc24fe 25637 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:dcf3f632ce4c55d3252fad99633775b2b06650a870ab8573dd192570beddc3e3 cni.projectcalico.org/podIP:100.96.1.167/32 cni.projectcalico.org/podIPs:100.96.1.167/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet 
webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6af10 0xc003b6af11}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qddf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qddf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.807: INFO: Pod "webserver-deployment-795d758f88-wf2kq" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-wf2kq webserver-deployment-795d758f88- deployment-1346 759d88fd-b269-4b18-8b73-4c488d1a7958 25644 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:bca1e4a49eff960585aa06109ac1a4a15abf26216dff6be1b8fcf5534011022c cni.projectcalico.org/podIP:100.96.0.68/32 cni.projectcalico.org/podIPs:100.96.0.68/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6de2aee6-4d48-41c1-9606-97ebcd64d344 0xc003b6b120 0xc003b6b121}] [] [{kube-controller-manager 
Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6de2aee6-4d48-41c1-9606-97ebcd64d344\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mqjcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqjcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminatio
nMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.807: INFO: Pod "webserver-deployment-847dcfb7fb-28pn8" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-28pn8 webserver-deployment-847dcfb7fb- deployment-1346 b2499171-f6a7-4d54-8a8b-a2abf54dd6fd 25631 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:2ae6bd2cb5b47e14f21d204721532a75c6445bc1a1dc3ba5c8d825a0fb3639a3 cni.projectcalico.org/podIP:100.96.0.64/32 cni.projectcalico.org/podIPs:100.96.0.64/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc003b6b330 0xc003b6b331}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8dxtt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dxtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.807: INFO: Pod "webserver-deployment-847dcfb7fb-4nkrp" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4nkrp webserver-deployment-847dcfb7fb- deployment-1346 8a286251-37e8-487e-b2ac-a0905e49eee2 25479 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:55fa00a828a6bc26c81c85e23ed341ea0f89c55893607a904a14344ca538f00c cni.projectcalico.org/podIP:100.96.1.159/32 cni.projectcalico.org/podIPs:100.96.1.159/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc003b6b520 0xc003b6b521}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.159\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9zq55,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zq55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.159,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://5563cb3ed1d731195b1a041346320edcf3ba397c5ba5a3e1ab89ee444c9b63cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.807: INFO: Pod "webserver-deployment-847dcfb7fb-54z6g" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-54z6g webserver-deployment-847dcfb7fb- deployment-1346 bbe7628b-26a8-4217-a0b5-4fcd3854b363 25489 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:87bb1e5303d9a19cb5cab0fc663b14f4421be860ddadc0723443f2452b5760c7 cni.projectcalico.org/podIP:100.96.1.157/32 cni.projectcalico.org/podIPs:100.96.1.157/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc003b6b730 0xc003b6b731}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.157\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bgm72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgm72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.157,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://a358f08c11188cf17c9c310cb23bb599aae7b1937b402b942154506fc573124a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.807: INFO: Pod "webserver-deployment-847dcfb7fb-9685j" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9685j webserver-deployment-847dcfb7fb- deployment-1346 05fc5d1f-f1ce-4b9a-94dc-c351d677b7f4 25634 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:c0a0170973ad627614cc2b4f70d2f072b14a5b2a65d81d0fea86425ff41683d5 cni.projectcalico.org/podIP:100.96.0.67/32 cni.projectcalico.org/podIPs:100.96.0.67/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc003b6b940 0xc003b6b941}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t8nmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8nmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.808: INFO: Pod "webserver-deployment-847dcfb7fb-b566z" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-b566z webserver-deployment-847dcfb7fb- deployment-1346 d8486ded-8ce6-47dd-a861-038753daa6e5 25628 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:fbf9b9f6e3746973399e193b67b091528704aff907b63986cf2f6793187fcb77 cni.projectcalico.org/podIP:100.96.1.165/32 cni.projectcalico.org/podIPs:100.96.1.165/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc003b6bb40 0xc003b6bb41}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rnqnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnqnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.808: INFO: Pod "webserver-deployment-847dcfb7fb-d6sdm" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-d6sdm webserver-deployment-847dcfb7fb- deployment-1346 3a7468bb-320d-4b3a-9660-2d545e1f301c 25652 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:d397bf4acb4256c6636678c8c428627a80835bb82fab5e44f92825524ef66992 cni.projectcalico.org/podIP:100.96.0.70/32 cni.projectcalico.org/podIPs:100.96.0.70/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc003b6bd30 0xc003b6bd31}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4952s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4952s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.808: INFO: Pod "webserver-deployment-847dcfb7fb-fbtqm" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-fbtqm webserver-deployment-847dcfb7fb- deployment-1346 c700080e-0489-47ea-b6f0-0a11a7ba5ee5 25461 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:5a6605c770109257b2bc86c1900227f485a6c3bf2f0fdd67edf27827a26719d1 cni.projectcalico.org/podIP:100.96.0.59/32 cni.projectcalico.org/podIPs:100.96.0.59/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc003b6bf20 0xc003b6bf21}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7f7xh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7f7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:100.96.0.59,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://6a88801beeadd280087537ecac5eac4adcdee191658d8bd56912abece4e653dc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.808: INFO: Pod "webserver-deployment-847dcfb7fb-ghnhg" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ghnhg webserver-deployment-847dcfb7fb- deployment-1346 284d9974-4526-4899-b607-debb5cbc1fd2 25639 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:b00cb020241475e3e696eeb744e68e8d2727c27a13a066bb5142634a49e51b43 cni.projectcalico.org/podIP:100.96.1.168/32 cni.projectcalico.org/podIPs:100.96.1.168/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc0029b61b0 0xc0029b61b1}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pnz4t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnz4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.808: INFO: Pod "webserver-deployment-847dcfb7fb-glx48" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-glx48 webserver-deployment-847dcfb7fb- deployment-1346 3952dadc-71ff-4640-b43b-395924190a1f 25633 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:3e2d617b3306f596133916b2c7f2a57fc59fcd1ed361b1b3c558b6a5f41b66f1 cni.projectcalico.org/podIP:100.96.0.66/32 cni.projectcalico.org/podIPs:100.96.0.66/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc0029b65c0 0xc0029b65c1}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-glzcb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glzcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.809: INFO: Pod "webserver-deployment-847dcfb7fb-gxldw" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gxldw webserver-deployment-847dcfb7fb- deployment-1346 35e4461a-2e72-40a0-b17a-5d0b978c48a6 25467 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:4764e01e97604e5a088eb7ad87afac5dfba75c62a9827bf47a58ddcdf1be7a53 cni.projectcalico.org/podIP:100.96.0.57/32 cni.projectcalico.org/podIPs:100.96.0.57/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc0029b6900 0xc0029b6901}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z28xp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z28xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:100.96.0.57,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://f72d024e785b76db8746c86763be152b8bf2ca67b1ea1ac0247b69d2b2c753e3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.809: INFO: Pod "webserver-deployment-847dcfb7fb-l9942" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-l9942 webserver-deployment-847dcfb7fb- deployment-1346 d47e9356-9d55-4f48-81b4-9114928db76d 25641 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:224b263680d0ef192403e566419bdb69eeeedfca303ac951544f235f5e034080 cni.projectcalico.org/podIP:100.96.1.171/32 cni.projectcalico.org/podIPs:100.96.1.171/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc0029b7100 0xc0029b7101}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zb9q8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zb9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.809: INFO: Pod "webserver-deployment-847dcfb7fb-n66hm" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-n66hm webserver-deployment-847dcfb7fb- deployment-1346 c5c74bd4-343c-4334-8e7b-619e8c70f568 25645 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:6adcb8169a548717bcc2e1e44c7ce152bf67436f835ab9c3aac627571733efc4 cni.projectcalico.org/podIP:100.96.1.169/32 cni.projectcalico.org/podIPs:100.96.1.169/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc0029b7480 0xc0029b7481}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-snq7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-snq7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.809: INFO: Pod "webserver-deployment-847dcfb7fb-nl9fj" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nl9fj webserver-deployment-847dcfb7fb- deployment-1346 c2ca6b0a-5373-4904-9cba-7d99e348edf4 25458 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:807aec99602683261fbe69a7ef0a9f176ec759f64209e86566fa1285bb1da871 cni.projectcalico.org/podIP:100.96.0.60/32 cni.projectcalico.org/podIPs:100.96.0.60/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc0029b77d0 0xc0029b77d1}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fm8tb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm8tb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:100.96.0.60,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://48017f340cebd73a5a24f0106e11f32ab5e92823b879283a53884030b93aa9a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.809: INFO: Pod "webserver-deployment-847dcfb7fb-s9rsb" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-s9rsb webserver-deployment-847dcfb7fb- deployment-1346 9a95aa48-62e3-4bab-b305-99e9e942b633 25635 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:112bbf4e817924b013cc508853035197899237bf13ef79db03174c6c010b1d5a cni.projectcalico.org/podIP:100.96.1.166/32 cni.projectcalico.org/podIPs:100.96.1.166/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc0029b7b90 0xc0029b7b91}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4cslg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cslg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.810: INFO: Pod "webserver-deployment-847dcfb7fb-shtsv" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-shtsv webserver-deployment-847dcfb7fb- deployment-1346 83be00cc-13d1-4407-907b-a590df427946 25629 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:bc90cfcd4dd35420237494725ebbed0bf4582dc0aa300e1d2d74ce3a2ae60ebe cni.projectcalico.org/podIP:100.96.1.164/32 cni.projectcalico.org/podIPs:100.96.1.164/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc002f30740 0xc002f30741}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bfktb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfktb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.810: INFO: Pod "webserver-deployment-847dcfb7fb-sxsbp" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-sxsbp webserver-deployment-847dcfb7fb- deployment-1346 8ee043a7-31e8-4b9a-b56f-ff8a5f718296 25647 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:7c7942672afb5cb758fe0fb83291088863997f42086a5f7acd7f49f7736005e2 cni.projectcalico.org/podIP:100.96.0.69/32 cni.projectcalico.org/podIPs:100.96.0.69/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc002f30a30 0xc002f30a31}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-42wz6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.810: INFO: Pod "webserver-deployment-847dcfb7fb-vlwrc" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vlwrc webserver-deployment-847dcfb7fb- deployment-1346 845d731e-f49a-43b4-b958-1ddc3b912319 25464 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:bbb5ec942297daac0204998e5c49df052d976524d835cf4c6bcdeb915405d0fb cni.projectcalico.org/podIP:100.96.0.58/32 cni.projectcalico.org/podIPs:100.96.0.58/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc002f30e20 0xc002f30e21}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9x929,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9x929,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:100.96.0.58,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://cbd84466f5d64deac649d0703570bcdc803f2c20b2e2f05b47c34aabd1c38e7c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.810: INFO: Pod "webserver-deployment-847dcfb7fb-vnl8z" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vnl8z webserver-deployment-847dcfb7fb- deployment-1346 72fb9da2-4c7c-489c-9478-440892b4328a 25648 0 2021-10-27 14:46:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:8be239c9d0de7e0b21693ca246348fd7afd522b628abdb01a8c09f20d950ac79 cni.projectcalico.org/podIP:100.96.0.71/32 cni.projectcalico.org/podIPs:100.96.0.71/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc002f31190 0xc002f31191}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:46:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 14:46:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-grrm8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grrm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 14:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.810: INFO: Pod "webserver-deployment-847dcfb7fb-x8jrt" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-x8jrt webserver-deployment-847dcfb7fb- deployment-1346 11008636-bb10-4b0f-9e43-72fe0c46416d 25454 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:f2bfc9e3278ee5e01b05c0b8ad5d1a75ea378a3b31192dd406763791c89e8eb9 cni.projectcalico.org/podIP:100.96.1.155/32 cni.projectcalico.org/podIPs:100.96.1.155/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc002f31710 0xc002f31711}] [] [{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.155\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k82zx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k82zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.155,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://3698bb6684d966185c230fe698be6f6abb70ffebc9e78362452038a49d0510ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.155,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:46:42.810: INFO: Pod "webserver-deployment-847dcfb7fb-zcpfx" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zcpfx webserver-deployment-847dcfb7fb- deployment-1346 dd281687-4e4c-43e6-8880-0d47bff23d79 25486 0 2021-10-27 14:46:32 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:aae21e259fea3d0749bc334b035966fb6d0c9f692bf2140bc094977bbbea4217 cni.projectcalico.org/podIP:100.96.1.158/32 cni.projectcalico.org/podIPs:100.96.1.158/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b 0xc002f31930 0xc002f31931}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:46:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dfe5784-ab66-4212-8bcc-dfc6ce4c9f4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:46:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:46:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9bdv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bdv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.158,StartTime:2021-10-27 14:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:46:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://c7ac3560e1357942ba7a1bc8bb4258ab7fc001d31e36b4b19f29859f17791c5a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:42.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1346" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":138,"skipped":2336,"failed":0} + +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:42.836: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-3264 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Oct 27 14:46:43.108: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3264 b0e94de8-addd-4e1d-9813-7cd957645412 25664 0 2021-10-27 14:46:43 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:46:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:46:43.108: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3264 b0e94de8-addd-4e1d-9813-7cd957645412 25665 0 2021-10-27 14:46:43 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:46:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:43.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-3264" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":139,"skipped":2336,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:43.134: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-8832 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:45.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-8832" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":140,"skipped":2378,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:45.455: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-4191 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 in namespace container-probe-4191 +Oct 27 14:46:49.690: INFO: Started pod liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 in namespace container-probe-4191 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:46:49.702: INFO: Initial restart count of pod liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 is 0 +Oct 27 14:47:07.877: INFO: Restart count 
of pod container-probe-4191/liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 is now 1 (18.174674661s elapsed) +Oct 27 14:47:28.009: INFO: Restart count of pod container-probe-4191/liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 is now 2 (38.306539856s elapsed) +Oct 27 14:47:48.139: INFO: Restart count of pod container-probe-4191/liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 is now 3 (58.436767655s elapsed) +Oct 27 14:48:08.424: INFO: Restart count of pod container-probe-4191/liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 is now 4 (1m18.722206048s elapsed) +Oct 27 14:49:08.840: INFO: Restart count of pod container-probe-4191/liveness-ced0f56c-83ad-4676-84ed-7195fe6ca7f6 is now 5 (2m19.137824925s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:08.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-4191" for this suite. +•{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":141,"skipped":2386,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:08.891: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7171 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:49:09.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb" in namespace "projected-7171" to be "Succeeded or Failed" +Oct 27 14:49:09.117: INFO: Pod "downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.606485ms +Oct 27 14:49:11.130: INFO: Pod "downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb": Phase="Running", Reason="", readiness=true. Elapsed: 2.026217275s +Oct 27 14:49:13.142: INFO: Pod "downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03878009s +STEP: Saw pod success +Oct 27 14:49:13.142: INFO: Pod "downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb" satisfied condition "Succeeded or Failed" +Oct 27 14:49:13.154: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb container client-container: +STEP: delete the pod +Oct 27 14:49:13.263: INFO: Waiting for pod downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb to disappear +Oct 27 14:49:13.274: INFO: Pod downwardapi-volume-8e8d1ce7-4b8a-473c-a16a-5ebbe0fbbefb no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:13.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7171" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":142,"skipped":2393,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:13.308: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5655 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting the proxy server +Oct 27 14:49:13.497: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5655 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:13.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5655" for this suite. 
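For context on the proxy test above: `--port 0` (logged as `-p 0` in the command line) asks `kubectl proxy` to bind an ephemeral port and report the address it chose, which the test then curls. A rough hand-run equivalent, assuming the port is read off the printed "Starting to serve on" line:

```bash
# --port=0 makes kubectl proxy bind an ephemeral port and print it,
# e.g. "Starting to serve on 127.0.0.1:41735"
kubectl proxy --port=0 --disable-filter=true >proxy.out 2>&1 &
PROXY_PID=$!
sleep 1  # crude readiness wait; the e2e framework parses the printed line instead

# extract the advertised port and fetch /api/ through the proxy, as the test does
PORT=$(sed -n 's/.*Starting to serve on 127\.0\.0\.1:\([0-9]*\).*/\1/p' proxy.out)
curl "http://127.0.0.1:${PORT}/api/"

kill "$PROXY_PID"
```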
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":143,"skipped":2405,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:13.607: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4081 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:49:14.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:49:16.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942954, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:49:19.688: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should 
include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:19.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4081" for this suite. +STEP: Destroying namespace "webhook-4081-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":144,"skipped":2433,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:19.841: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3160 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Oct 27 14:49:20.133: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:20.133: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:49:21.167: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:21.167: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:49:22.167: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:22.167: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:49:23.168: INFO: Number of nodes with available pods: 2 +Oct 27 14:49:23.168: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Oct 27 14:49:23.229: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:23.230: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:49:24.272: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:24.272: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:49:25.264: INFO: Number of nodes with available pods: 1 +Oct 27 14:49:25.264: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:49:26.264: INFO: Number of nodes with available pods: 2 +Oct 27 14:49:26.264: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3160, will wait for the garbage collector to delete the pods +Oct 27 14:49:26.367: INFO: Deleting DaemonSet.extensions daemon-set took: 19.291575ms +Oct 27 14:49:26.468: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.312791ms +Oct 27 14:49:29.580: INFO: Number of nodes with available pods: 0 +Oct 27 14:49:29.580: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:49:29.592: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"26941"},"items":null} + +Oct 27 14:49:29.603: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"26941"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:29.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3160" for this suite. 
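
The availability checks above poll the DaemonSet status until every running node reports an available pod. A client-go sketch of the same loop, reusing the kubeconfig path, namespace, and DaemonSet name from this run (illustrative only, not the framework's implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the DaemonSet reports one available pod per scheduled node,
	// mirroring the "Number of running nodes / available pods" lines above.
	err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		ds, err := client.AppsV1().DaemonSets("daemonsets-3160").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("available %d / desired %d\n", ds.Status.NumberAvailable, ds.Status.DesiredNumberScheduled)
		return ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}
```
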
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":145,"skipped":2457,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:29.675: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7278 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-7278, will wait for the garbage collector to delete the pods +Oct 27 14:49:33.969: INFO: Deleting Job.batch foo took: 13.127478ms +Oct 27 14:49:34.070: INFO: Terminating Job.batch foo pods took: 100.588518ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:06.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-7278" for this suite. +•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":146,"skipped":2471,"failed":0} +S +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:06.517: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-8810 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:50:06.706: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 14:50:09.901: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8810 --namespace=crd-publish-openapi-8810 create -f -' +Oct 27 14:50:10.452: INFO: stderr: "" +Oct 27 14:50:10.452: INFO: stdout: "e2e-test-crd-publish-openapi-1817-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 14:50:10.452: INFO: 
Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8810 --namespace=crd-publish-openapi-8810 delete e2e-test-crd-publish-openapi-1817-crds test-cr' +Oct 27 14:50:10.559: INFO: stderr: "" +Oct 27 14:50:10.559: INFO: stdout: "e2e-test-crd-publish-openapi-1817-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Oct 27 14:50:10.559: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8810 --namespace=crd-publish-openapi-8810 apply -f -' +Oct 27 14:50:10.784: INFO: stderr: "" +Oct 27 14:50:10.784: INFO: stdout: "e2e-test-crd-publish-openapi-1817-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 14:50:10.784: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8810 --namespace=crd-publish-openapi-8810 delete e2e-test-crd-publish-openapi-1817-crds test-cr' +Oct 27 14:50:10.889: INFO: stderr: "" +Oct 27 14:50:10.889: INFO: stdout: "e2e-test-crd-publish-openapi-1817-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 14:50:10.889: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-8810 explain e2e-test-crd-publish-openapi-1817-crds' +Oct 27 14:50:11.107: INFO: stderr: "" +Oct 27 14:50:11.107: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1817-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:14.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8810" for this suite. 
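
Create and apply succeed with arbitrary properties here because the CRD under test keeps unknown fields at the schema root, which also leaves `kubectl explain` with an empty description. A sketch of such a root schema using the apiextensions/v1 Go types (the enclosing CustomResourceDefinition object is omitted for brevity):

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	preserve := true
	// A root schema that retains unknown client-supplied fields instead of
	// pruning them; the published OpenAPI then has no concrete properties.
	validation := apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type:                   "object",
			XPreserveUnknownFields: &preserve,
		},
	}
	fmt.Printf("%+v\n", validation)
}
```
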
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":147,"skipped":2472,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:14.829: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1329 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 14:50:21.129: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:50:21.129279 5768 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:21.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1329" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":148,"skipped":2476,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:21.154: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-791 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:50:21.437: INFO: Number of nodes with available pods: 0 +Oct 27 14:50:21.438: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:50:22.472: INFO: Number of nodes with available pods: 0 +Oct 27 14:50:22.472: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:50:23.472: INFO: Number of nodes with available pods: 0 +Oct 27 14:50:23.472: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:50:24.473: INFO: Number of nodes with available pods: 0 +Oct 27 14:50:24.473: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 is running more than one daemon pod +Oct 27 14:50:25.482: INFO: Number of nodes with available pods: 1 +Oct 27 14:50:25.482: INFO: Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 is running more than one daemon pod +Oct 27 14:50:26.472: INFO: Number of nodes with available pods: 2 +Oct 27 14:50:26.472: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Getting /status +Oct 27 14:50:26.496: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Oct 27 14:50:26.521: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Oct 27 14:50:26.532: INFO: Observed &DaemonSet event: ADDED +Oct 27 14:50:26.532: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:50:26.532: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:50:26.532: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:50:26.532: INFO: Found daemon set daemon-set in namespace daemonsets-791 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:50:26.532: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet 
Status +STEP: watching for the daemon set status to be patched +Oct 27 14:50:26.555: INFO: Observed &DaemonSet event: ADDED +Oct 27 14:50:26.556: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:50:26.556: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:50:26.556: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:50:26.556: INFO: Observed daemon set daemon-set in namespace daemonsets-791 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:50:26.556: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 14:50:26.556: INFO: Found daemon set daemon-set in namespace daemonsets-791 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Oct 27 14:50:26.556: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-791, will wait for the garbage collector to delete the pods +Oct 27 14:50:26.645: INFO: Deleting DaemonSet.extensions daemon-set took: 12.276838ms +Oct 27 14:50:26.745: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.264349ms +Oct 27 14:50:29.657: INFO: Number of nodes with available pods: 0 +Oct 27 14:50:29.657: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:50:29.668: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"27478"},"items":null} + +Oct 27 14:50:29.679: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"27478"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:29.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-791" for this suite. 
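
The `StatusUpdate` condition above is written through the /status subresource; a plain update to the DaemonSet object would leave status unchanged. A client-go sketch of that write, using the names from this run:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	ds, err := client.AppsV1().DaemonSets("daemonsets-791").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Append the same condition the test sets, then write it via the
	// /status subresource endpoint.
	ds.Status.Conditions = append(ds.Status.Conditions, appsv1.DaemonSetCondition{
		Type:    "StatusUpdate",
		Status:  corev1.ConditionTrue,
		Reason:  "E2E",
		Message: "Set from e2e test",
	})
	if _, err := client.AppsV1().DaemonSets("daemonsets-791").UpdateStatus(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```
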
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":149,"skipped":2483,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:29.751: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8350 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-8350/configmap-test-419baba6-1eff-4195-818d-ceabc272d9a2 +STEP: Creating a pod to test consume configMaps +Oct 27 14:50:29.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649" in namespace "configmap-8350" to be "Succeeded or Failed" +Oct 27 14:50:29.976: INFO: Pod "pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649": Phase="Pending", Reason="", readiness=false. Elapsed: 11.153702ms +Oct 27 14:50:31.988: INFO: Pod "pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023130861s +Oct 27 14:50:34.001: INFO: Pod "pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035887368s +STEP: Saw pod success +Oct 27 14:50:34.001: INFO: Pod "pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649" satisfied condition "Succeeded or Failed" +Oct 27 14:50:34.012: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649 container env-test: +STEP: delete the pod +Oct 27 14:50:34.081: INFO: Waiting for pod pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649 to disappear +Oct 27 14:50:34.093: INFO: Pod pod-configmaps-6be61d75-224e-42e3-b236-839b464d2649 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:34.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8350" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":150,"skipped":2512,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:34.128: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename runtimeclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-9213 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Oct 27 14:50:34.542: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Oct 27 14:50:34.622: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:34.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-9213" for this suite. +•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":151,"skipped":2520,"failed":0} +SSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:34.707: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6037 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-146efa38-f1f8-4cba-a162-dfdfbc0f1061 +STEP: Creating the pod +Oct 27 14:50:34.947: INFO: The status of Pod pod-projected-configmaps-1a576ca6-2594-4c01-949b-556e3717c5da is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:50:36.959: INFO: The status of Pod pod-projected-configmaps-1a576ca6-2594-4c01-949b-556e3717c5da is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:50:38.961: INFO: The status of Pod pod-projected-configmaps-1a576ca6-2594-4c01-949b-556e3717c5da is Running (Ready = true) +STEP: Updating configmap 
projected-configmap-test-upd-146efa38-f1f8-4cba-a162-dfdfbc0f1061 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:04.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6037" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":152,"skipped":2524,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:04.700: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-4463 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Oct 27 14:52:04.907: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:06.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-4463" for this suite. 
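
The scale subresource exercised above lets a client resize a ReplicaSet without touching the rest of its spec. A client-go sketch with the names from this run; the target replica count is illustrative:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Read the scale subresource, bump the replica count, and write it back;
	// the test then verifies Spec.Replicas on the ReplicaSet itself changed.
	scale, err := client.AppsV1().ReplicaSets("replicaset-4463").GetScale(ctx, "test-rs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2
	updated, err := client.AppsV1().ReplicaSets("replicaset-4463").UpdateScale(ctx, "test-rs", scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicas:", updated.Spec.Replicas)
}
```
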
+•{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":153,"skipped":2541,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:07.029: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3561 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 14:52:07.241: INFO: Waiting up to 5m0s for pod "pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7" in namespace "emptydir-3561" to be "Succeeded or Failed" +Oct 27 14:52:07.252: INFO: Pod "pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.046554ms +Oct 27 14:52:09.271: INFO: Pod "pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030128584s +Oct 27 14:52:11.285: INFO: Pod "pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04423936s +STEP: Saw pod success +Oct 27 14:52:11.285: INFO: Pod "pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7" satisfied condition "Succeeded or Failed" +Oct 27 14:52:11.298: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7 container test-container: +STEP: delete the pod +Oct 27 14:52:11.363: INFO: Waiting for pod pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7 to disappear +Oct 27 14:52:11.374: INFO: Pod pod-8226e2b6-31fb-463e-a390-ffbe7b24ffb7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:11.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3561" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":154,"skipped":2548,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:11.409: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-9317 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Oct 27 14:52:11.643: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:11.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-9317" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":155,"skipped":2563,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:11.706: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9649 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-6362caf7-f87d-4017-b01b-5b65fed94ae6 +STEP: Creating a pod to test consume configMaps +Oct 27 14:52:11.924: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732" in namespace "projected-9649" to be "Succeeded or Failed" +Oct 27 14:52:11.935: INFO: Pod "pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.296586ms +Oct 27 14:52:13.948: INFO: Pod "pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024248179s +Oct 27 14:52:15.960: INFO: Pod "pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036458575s +STEP: Saw pod success +Oct 27 14:52:15.960: INFO: Pod "pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732" satisfied condition "Succeeded or Failed" +Oct 27 14:52:15.973: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732 container agnhost-container: +STEP: delete the pod +Oct 27 14:52:16.038: INFO: Waiting for pod pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732 to disappear +Oct 27 14:52:16.049: INFO: Pod pod-projected-configmaps-2fb47910-c1c3-4d28-bcf1-32148d05c732 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:16.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9649" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":156,"skipped":2576,"failed":0} +SSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:16.083: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6274 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service endpoint-test2 in namespace services-6274 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6274 to expose endpoints map[] +Oct 27 14:52:16.323: INFO: successfully validated that service endpoint-test2 in namespace services-6274 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-6274 +Oct 27 14:52:16.354: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:52:18.366: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:52:20.367: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6274 to expose endpoints map[pod1:[80]] +Oct 27 14:52:20.423: INFO: successfully validated that service endpoint-test2 in namespace services-6274 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Oct 27 14:52:20.423: INFO: Creating new 
exec pod +Oct 27 14:52:25.467: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6274 exec execpodmsdr7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:52:25.983: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2 80\n+ echo hostName\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:52:25.983: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:52:25.983: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6274 exec execpodmsdr7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.192.56 80' +Oct 27 14:52:26.476: INFO: stderr: "+ nc -v -t -w 2 100.67.192.56 80\n+ echo hostName\nConnection to 100.67.192.56 80 port [tcp/http] succeeded!\n" +Oct 27 14:52:26.476: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-6274 +Oct 27 14:52:26.504: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:52:28.518: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:52:30.517: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6274 to expose endpoints map[pod1:[80] pod2:[80]] +Oct 27 14:52:30.587: INFO: successfully validated that service endpoint-test2 in namespace services-6274 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Oct 27 14:52:31.588: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6274 exec execpodmsdr7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:52:32.086: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2 80\n+ echo hostName\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:52:32.086: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:52:32.086: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6274 exec execpodmsdr7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.192.56 80' +Oct 27 14:52:32.574: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.192.56 80\nConnection to 100.67.192.56 80 port [tcp/http] succeeded!\n" +Oct 27 14:52:32.574: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-6274 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6274 to expose endpoints map[pod2:[80]] +Oct 27 14:52:32.637: INFO: successfully validated that service endpoint-test2 in namespace services-6274 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Oct 27 14:52:33.638: 
INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6274 exec execpodmsdr7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:52:34.147: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Oct 27 14:52:34.147: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:52:34.148: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6274 exec execpodmsdr7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.192.56 80' +Oct 27 14:52:34.744: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.192.56 80\nConnection to 100.67.192.56 80 port [tcp/http] succeeded!\n" +Oct 27 14:52:34.744: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-6274 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6274 to expose endpoints map[] +Oct 27 14:52:34.795: INFO: successfully validated that service endpoint-test2 in namespace services-6274 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:34.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6274" for this suite. 
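
Each `exposes endpoints map[...]` line above summarises the Endpoints object that kube-proxy consumes for the service. A client-go sketch that dumps the same view for this run's service:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Print every ready address/port pair behind the service; as pods come
	// and go, this is the set the test's map[pod1:[80] pod2:[80]] reflects.
	ep, err := client.CoreV1().Endpoints("services-6274").Get(context.TODO(), "endpoint-test2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			for _, port := range subset.Ports {
				fmt.Printf("%s:%d\n", addr.IP, port.Port)
			}
		}
	}
}
```
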
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":157,"skipped":2579,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:34.849: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-6549 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:52:35.065: INFO: The status of Pod server-envvars-8b3c3156-1c1a-44ed-adae-c2e6a084c637 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:52:37.078: INFO: The status of Pod server-envvars-8b3c3156-1c1a-44ed-adae-c2e6a084c637 is Running (Ready = true) +Oct 27 14:52:37.128: INFO: Waiting up to 5m0s for pod "client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23" in namespace "pods-6549" to be "Succeeded or Failed" +Oct 27 14:52:37.140: INFO: Pod "client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23": Phase="Pending", Reason="", readiness=false. Elapsed: 11.6395ms +Oct 27 14:52:39.152: INFO: Pod "client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23": Phase="Running", Reason="", readiness=true. Elapsed: 2.023661269s +Oct 27 14:52:41.164: INFO: Pod "client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036365816s +STEP: Saw pod success +Oct 27 14:52:41.164: INFO: Pod "client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23" satisfied condition "Succeeded or Failed" +Oct 27 14:52:41.176: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23 container env3cont: +STEP: delete the pod +Oct 27 14:52:41.241: INFO: Waiting for pod client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23 to disappear +Oct 27 14:52:41.252: INFO: Pod client-envvars-6947c3d7-ea36-4837-8566-0eacce777e23 no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:41.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-6549" for this suite. 
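
The client pod passes because kubelet injects `<SERVICE_NAME>_SERVICE_HOST` and `<SERVICE_NAME>_SERVICE_PORT` variables for every service that existed when the pod started. A sketch of what the test's client container effectively does (standard library only):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Print all service-injected environment variables visible to this
	// container; the test greps the client pod's log for exactly these.
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_") {
			fmt.Println(kv)
		}
	}
}
```
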
+•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":158,"skipped":2592,"failed":0} +SSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:41.287: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-1449 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:52:41.486: INFO: Got root ca configmap in namespace "svcaccounts-1449" +Oct 27 14:52:41.499: INFO: Deleted root ca configmap in namespace "svcaccounts-1449" +STEP: waiting for a new root ca configmap created +Oct 27 14:52:42.012: INFO: Recreated root ca configmap in namespace "svcaccounts-1449" +Oct 27 14:52:42.024: INFO: Updated root ca configmap in namespace "svcaccounts-1449" +STEP: waiting for the root ca configmap reconciled +Oct 27 14:52:42.537: INFO: Reconciled root ca configmap in namespace "svcaccounts-1449" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:42.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-1449" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":159,"skipped":2601,"failed":0} +S +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:42.571: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3851 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:52:42.757: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 14:52:45.988: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3851 --namespace=crd-publish-openapi-3851 create -f -' +Oct 27 14:52:46.537: INFO: stderr: "" +Oct 27 14:52:46.537: INFO: stdout: "e2e-test-crd-publish-openapi-6676-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 14:52:46.537: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3851 --namespace=crd-publish-openapi-3851 delete e2e-test-crd-publish-openapi-6676-crds test-cr' +Oct 27 14:52:46.642: INFO: stderr: "" +Oct 27 14:52:46.642: INFO: stdout: "e2e-test-crd-publish-openapi-6676-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Oct 27 14:52:46.642: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3851 --namespace=crd-publish-openapi-3851 apply -f -' +Oct 27 14:52:46.871: INFO: stderr: "" +Oct 27 14:52:46.871: INFO: stdout: "e2e-test-crd-publish-openapi-6676-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 14:52:46.871: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3851 --namespace=crd-publish-openapi-3851 delete e2e-test-crd-publish-openapi-6676-crds test-cr' +Oct 27 14:52:46.978: INFO: stderr: "" +Oct 27 14:52:46.978: INFO: stdout: "e2e-test-crd-publish-openapi-6676-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 14:52:46.978: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-3851 explain e2e-test-crd-publish-openapi-6676-crds' +Oct 27 14:52:47.165: INFO: stderr: "" +Oct 27 14:52:47.165: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6676-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:50.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3851" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":160,"skipped":2602,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:50.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6636 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-c7f940f4-e739-4baa-b69f-ba5c507c3ad0 +STEP: Creating a pod to test consume secrets +Oct 27 14:52:51.086: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71940e5d-55cd-461c-90dd-c01a34d9fb2f" in namespace "projected-6636" to be "Succeeded or Failed" +Oct 27 14:52:51.098: INFO: Pod "pod-projected-secrets-71940e5d-55cd-461c-90dd-c01a34d9fb2f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.384591ms +Oct 27 14:52:53.111: INFO: Pod "pod-projected-secrets-71940e5d-55cd-461c-90dd-c01a34d9fb2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024827398s +STEP: Saw pod success +Oct 27 14:52:53.111: INFO: Pod "pod-projected-secrets-71940e5d-55cd-461c-90dd-c01a34d9fb2f" satisfied condition "Succeeded or Failed" +Oct 27 14:52:53.122: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-secrets-71940e5d-55cd-461c-90dd-c01a34d9fb2f container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:52:53.236: INFO: Waiting for pod pod-projected-secrets-71940e5d-55cd-461c-90dd-c01a34d9fb2f to disappear +Oct 27 14:52:53.247: INFO: Pod pod-projected-secrets-71940e5d-55cd-461c-90dd-c01a34d9fb2f no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:53.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6636" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":161,"skipped":2643,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:53.286: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1183 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:00.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1183" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":346,"completed":162,"skipped":2652,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:00.545: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3020 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-5dfa83e1-19cd-4e34-93e5-ed1c079ef655 +STEP: Creating a pod to test consume secrets +Oct 27 14:53:00.769: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab" in namespace "projected-3020" to be "Succeeded or Failed" +Oct 27 14:53:00.780: INFO: Pod "pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab": Phase="Pending", Reason="", readiness=false. Elapsed: 11.321399ms +Oct 27 14:53:02.793: INFO: Pod "pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024559421s +Oct 27 14:53:04.806: INFO: Pod "pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036806534s +STEP: Saw pod success +Oct 27 14:53:04.806: INFO: Pod "pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab" satisfied condition "Succeeded or Failed" +Oct 27 14:53:04.817: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:53:04.883: INFO: Waiting for pod pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab to disappear +Oct 27 14:53:04.895: INFO: Pod pod-projected-secrets-0402c818-bd1f-41f4-b0b5-f5d861d195ab no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:04.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3020" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":163,"skipped":2698,"failed":0} +SSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:04.929: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4550 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Service +STEP: watching for the Service to be added +Oct 27 14:53:05.158: INFO: Found Service test-service-vqnj5 in namespace services-4550 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Oct 27 14:53:05.158: INFO: Service test-service-vqnj5 created +STEP: Getting /status +Oct 27 14:53:05.169: INFO: Service test-service-vqnj5 has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Oct 27 14:53:05.191: INFO: observed Service test-service-vqnj5 in namespace services-4550 with annotations: map[] & LoadBalancer: {[]} +Oct 27 14:53:05.191: INFO: Found Service test-service-vqnj5 in namespace services-4550 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Oct 27 14:53:05.192: INFO: Service test-service-vqnj5 has service status patched +STEP: updating the ServiceStatus +Oct 27 14:53:05.214: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Oct 27 14:53:05.224: INFO: Observed Service test-service-vqnj5 in namespace services-4550 with annotations: map[] & Conditions: {[]} +Oct 27 14:53:05.224: INFO: Observed event: &Service{ObjectMeta:{test-service-vqnj5 services-4550 c7cb5c4d-bdc2-4de6-b1ad-56586fa2da62 28716 0 2021-10-27 14:53:05 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-27 14:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2021-10-27 14:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:100.64.44.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[100.64.44.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Oct 27 14:53:05.225: INFO: Observed event: &Service{ObjectMeta:{test-service-vqnj5 services-4550 c7cb5c4d-bdc2-4de6-b1ad-56586fa2da62 28717 0 2021-10-27 14:53:05 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [azure.remedy.gardener.cloud/service] [{e2e.test Update v1 2021-10-27 14:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2021-10-27 14:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status} {remedy-controller-azure Update v1 2021-10-27 14:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"azure.remedy.gardener.cloud/service\"":{}}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:100.64.44.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[100.64.44.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Oct 27 14:53:05.225: INFO: Found Service test-service-vqnj5 in namespace services-4550 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:53:05.225: INFO: Service test-service-vqnj5 has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Oct 27 14:53:05.248: INFO: observed Service test-service-vqnj5 in namespace services-4550 with labels: map[test-service-static:true] +Oct 27 14:53:05.248: INFO: observed Service test-service-vqnj5 in namespace services-4550 with labels: map[test-service-static:true] +Oct 27 14:53:05.248: INFO: observed Service test-service-vqnj5 in namespace services-4550 with labels: map[test-service-static:true] +Oct 27 14:53:05.248: INFO: observed Service test-service-vqnj5 in namespace services-4550 with labels: map[test-service-static:true] +Oct 27 14:53:05.248: INFO: Found Service test-service-vqnj5 in namespace services-4550 with labels: map[test-service:patched test-service-static:true] +Oct 27 14:53:05.248: INFO: Service test-service-vqnj5 patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Oct 27 14:53:05.278: INFO: Observed 
event: ADDED +Oct 27 14:53:05.278: INFO: Observed event: MODIFIED +Oct 27 14:53:05.278: INFO: Observed event: MODIFIED +Oct 27 14:53:05.278: INFO: Observed event: MODIFIED +Oct 27 14:53:05.278: INFO: Observed event: MODIFIED +Oct 27 14:53:05.278: INFO: Observed event: MODIFIED +Oct 27 14:53:05.288: INFO: Found Service test-service-vqnj5 in namespace services-4550 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Oct 27 14:53:05.288: INFO: Service test-service-vqnj5 deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:05.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4550" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":164,"skipped":2701,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:05.313: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-869 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:53:06.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:53:08.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943186, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:53:11.976: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:12.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-869" for this suite. +STEP: Destroying namespace "webhook-869-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":165,"skipped":2733,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:12.502: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2638 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:53:16.728: INFO: Deleting pod "var-expansion-49013ad4-15a6-468e-b1ab-2a7988115ecc" in namespace "var-expansion-2638" +Oct 27 14:53:16.741: INFO: Wait up to 5m0s for pod "var-expansion-49013ad4-15a6-468e-b1ab-2a7988115ecc" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:20.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
+STEP: Destroying namespace "var-expansion-2638" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":166,"skipped":2740,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:20.801: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-6934 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Oct 27 14:53:21.063: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6934 2c4f90ac-a291-460c-9584-3b0dac33cef0 28915 0 2021-10-27 14:53:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:53:21.064: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6934 2c4f90ac-a291-460c-9584-3b0dac33cef0 28916 0 2021-10-27 14:53:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:53:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:53:21.064: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6934 2c4f90ac-a291-460c-9584-3b0dac33cef0 28917 0 2021-10-27 14:53:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:53:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Oct 27 14:53:31.146: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6934 2c4f90ac-a291-460c-9584-3b0dac33cef0 28976 0 2021-10-27 14:53:21 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:53:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:53:31.146: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6934 2c4f90ac-a291-460c-9584-3b0dac33cef0 28977 0 2021-10-27 14:53:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:53:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:53:31.146: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6934 2c4f90ac-a291-460c-9584-3b0dac33cef0 28978 0 2021-10-27 14:53:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:53:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:31.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-6934" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":167,"skipped":2773,"failed":0} + +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:31.180: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-9195 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:53:31.481: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:53:31.503: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:53:31.550: INFO: waiting for watch events with expected annotations +Oct 27 14:53:31.550: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:31.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-9195" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":168,"skipped":2773,"failed":0} + +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:31.642: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-4288 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4288.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4288.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:53:36.125: INFO: DNS probes using dns-test-92f57741-918b-42c7-a0c7-5be4fd7d0d1b succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4288.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4288.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:53:40.384: INFO: File wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:53:40.439: INFO: File jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 27 14:53:40.439: INFO: Lookups using dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 failed for: [wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local] + +Oct 27 14:53:45.488: INFO: File wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:53:45.531: INFO: File jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:53:45.531: INFO: Lookups using dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 failed for: [wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local] + +Oct 27 14:53:50.470: INFO: File wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:53:50.500: INFO: File jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:53:50.500: INFO: Lookups using dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 failed for: [wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local] + +Oct 27 14:53:55.482: INFO: File wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:53:55.514: INFO: File jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:53:55.515: INFO: Lookups using dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 failed for: [wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local] + +Oct 27 14:54:00.472: INFO: File wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:54:00.501: INFO: File jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:54:00.501: INFO: Lookups using dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 failed for: [wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local] + +Oct 27 14:54:05.471: INFO: File wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:54:05.500: INFO: File jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local from pod dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 27 14:54:05.500: INFO: Lookups using dns-4288/dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 failed for: [wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local] + +Oct 27 14:54:10.513: INFO: DNS probes using dns-test-9a7c54cc-adba-484c-ab0a-319388aa7765 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4288.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4288.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4288.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4288.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:54:14.846: INFO: DNS probes using dns-test-d78e2dc1-294c-4069-8a26-a52a21fe22be succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:54:14.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4288" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":169,"skipped":2773,"failed":0} + +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:54:14.918: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8277 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service multi-endpoint-test in namespace services-8277 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8277 to expose endpoints map[] +Oct 27 14:54:15.162: INFO: successfully validated that service multi-endpoint-test in namespace services-8277 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-8277 +Oct 27 14:54:15.191: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:17.204: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:19.205: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8277 to expose endpoints map[pod1:[100]] +Oct 27 14:54:19.260: INFO: successfully validated that service multi-endpoint-test in namespace 
services-8277 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-8277 +Oct 27 14:54:19.289: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:21.302: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:23.302: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8277 to expose endpoints map[pod1:[100] pod2:[101]] +Oct 27 14:54:23.371: INFO: successfully validated that service multi-endpoint-test in namespace services-8277 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Oct 27 14:54:23.371: INFO: Creating new exec pod +Oct 27 14:54:28.420: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8277 exec execpodskn59 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Oct 27 14:54:28.998: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:54:28.998: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:54:28.998: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8277 exec execpodskn59 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.104.179 80' +Oct 27 14:54:29.509: INFO: stderr: "+ nc -v -t -w 2 100.65.104.179 80\n+ echo hostName\nConnection to 100.65.104.179 80 port [tcp/http] succeeded!\n" +Oct 27 14:54:29.509: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:54:29.509: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8277 exec execpodskn59 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Oct 27 14:54:29.945: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Oct 27 14:54:29.945: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:54:29.945: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8277 exec execpodskn59 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.104.179 81' +Oct 27 14:54:30.430: INFO: stderr: "+ nc -v -t -w 2 100.65.104.179 81\n+ echo hostName\nConnection to 100.65.104.179 81 port [tcp/*] succeeded!\n" +Oct 27 14:54:30.430: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-8277 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8277 to expose endpoints map[pod2:[101]] +Oct 27 14:54:31.526: INFO: successfully validated that service multi-endpoint-test in 
namespace services-8277 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-8277 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8277 to expose endpoints map[] +Oct 27 14:54:31.577: INFO: successfully validated that service multi-endpoint-test in namespace services-8277 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:54:31.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8277" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":170,"skipped":2773,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:54:31.638: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-4100 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Oct 27 14:54:31.852: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:33.865: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:35.865: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:54:35.908: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:37.921: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:54:39.921: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 14:54:40.038: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 14:54:40.049: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 27 14:54:42.050: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 14:54:42.062: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:54:42.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4100" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":171,"skipped":2795,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:54:42.097: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2622 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:54:42.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2622" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":172,"skipped":2816,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:54:42.320: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6365 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod with failed condition +STEP: updating the pod +Oct 27 14:56:43.232: INFO: Successfully updated pod "var-expansion-2212c796-ab47-4254-a679-57fb08398598" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Oct 27 14:56:45.256: INFO: Deleting pod "var-expansion-2212c796-ab47-4254-a679-57fb08398598" in namespace "var-expansion-6365" +Oct 27 14:56:45.270: INFO: Wait up to 5m0s for pod "var-expansion-2212c796-ab47-4254-a679-57fb08398598" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:57:17.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6365" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":173,"skipped":2825,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:57:17.341: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-4505 +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:57:17.530: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating first CR +Oct 27 14:57:19.645: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:57:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:57:19Z]] name:name1 resourceVersion:30465 uid:92793274-bff1-4efe-bb27-b7e30fd67e00] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Oct 27 14:57:29.659: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:57:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:57:29Z]] name:name2 resourceVersion:30523 uid:1682d684-c727-4979-95c1-fcb6f15576aa] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Oct 27 14:57:39.673: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:57:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:57:39Z]] name:name1 resourceVersion:30574 uid:92793274-bff1-4efe-bb27-b7e30fd67e00] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Oct 27 14:57:49.687: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:57:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] 
f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:57:49Z]] name:name2 resourceVersion:30650 uid:1682d684-c727-4979-95c1-fcb6f15576aa] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Oct 27 14:57:59.705: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:57:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:57:39Z]] name:name1 resourceVersion:30703 uid:92793274-bff1-4efe-bb27-b7e30fd67e00] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Oct 27 14:58:09.719: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:57:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:57:49Z]] name:name2 resourceVersion:30754 uid:1682d684-c727-4979-95c1-fcb6f15576aa] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:20.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-4505" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":174,"skipped":2835,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:20.290: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2648 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-af26b5e5-32bc-451d-ab7c-3331a8df2710 +STEP: Creating configMap with name cm-test-opt-upd-35d903c3-eb6a-4586-a85e-79c817fe679c +STEP: Creating the pod +Oct 27 14:58:20.551: INFO: The status of Pod pod-configmaps-7988c0dd-390a-4b32-b6c9-2ce5bb8f2791 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:58:22.564: INFO: The status of Pod pod-configmaps-7988c0dd-390a-4b32-b6c9-2ce5bb8f2791 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:58:24.565: INFO: The status of Pod 
pod-configmaps-7988c0dd-390a-4b32-b6c9-2ce5bb8f2791 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-af26b5e5-32bc-451d-ab7c-3331a8df2710 +STEP: Updating configmap cm-test-opt-upd-35d903c3-eb6a-4586-a85e-79c817fe679c +STEP: Creating configMap with name cm-test-opt-create-9fa3fd84-405d-42e5-a369-336c923c596d +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:27.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2648" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":175,"skipped":2844,"failed":0} +SSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:27.134: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-3003 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. 
+Oct 27 14:58:27.353: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 14:58:32.371: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Oct 27 14:58:32.383: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Oct 27 14:58:32.411: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Oct 27 14:58:32.423: INFO: Observed &ReplicaSet event: ADDED +Oct 27 14:58:32.423: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:58:32.424: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:58:32.424: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:58:32.424: INFO: Found replicaset test-rs in namespace replicaset-3003 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:58:32.424: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Oct 27 14:58:32.424: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 14:58:32.442: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Oct 27 14:58:32.453: INFO: Observed &ReplicaSet event: ADDED +Oct 27 14:58:32.453: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:58:32.453: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:58:32.453: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:58:32.453: INFO: Observed replicaset test-rs in namespace replicaset-3003 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:58:32.453: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 14:58:32.453: INFO: Found replicaset test-rs in namespace replicaset-3003 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Oct 27 14:58:32.453: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:32.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3003" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":176,"skipped":2850,"failed":0} +SSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:32.487: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-162 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating all guestbook components +Oct 27 14:58:32.677: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Oct 27 14:58:32.677: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 create -f -' +Oct 27 14:58:32.933: INFO: stderr: "" +Oct 27 14:58:32.933: INFO: stdout: "service/agnhost-replica created\n" +Oct 27 14:58:32.933: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Oct 27 14:58:32.933: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 create -f -' +Oct 27 14:58:33.150: INFO: stderr: "" +Oct 27 14:58:33.150: INFO: stdout: "service/agnhost-primary created\n" +Oct 27 14:58:33.151: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Oct 27 14:58:33.151: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 create -f -' +Oct 27 14:58:33.433: INFO: stderr: "" +Oct 27 14:58:33.433: INFO: stdout: "service/frontend created\n" +Oct 27 14:58:33.433: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Oct 27 14:58:33.434: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 create -f -' +Oct 27 14:58:33.629: INFO: stderr: "" +Oct 27 14:58:33.629: INFO: stdout: "deployment.apps/frontend created\n" +Oct 27 14:58:33.630: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 14:58:33.630: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 create -f -' +Oct 27 14:58:33.832: INFO: stderr: "" +Oct 27 14:58:33.832: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Oct 27 14:58:33.832: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 14:58:33.832: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 create -f -' +Oct 27 14:58:34.031: INFO: stderr: "" +Oct 27 14:58:34.031: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Oct 27 14:58:34.031: INFO: Waiting for all frontend pods to be Running. +Oct 27 14:58:39.083: INFO: Waiting for frontend to serve content. +Oct 27 14:58:39.200: INFO: Trying to add a new entry to the guestbook. +Oct 27 14:58:39.312: INFO: Verifying that added entry can be retrieved. 
+STEP: using delete to clean up resources +Oct 27 14:58:39.424: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 delete --grace-period=0 --force -f -' +Oct 27 14:58:39.535: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:58:39.535: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:58:39.536: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 delete --grace-period=0 --force -f -' +Oct 27 14:58:39.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:58:39.653: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:58:39.653: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 delete --grace-period=0 --force -f -' +Oct 27 14:58:39.772: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:58:39.772: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:58:39.772: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 delete --grace-period=0 --force -f -' +Oct 27 14:58:39.876: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:58:39.876: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:58:39.876: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 delete --grace-period=0 --force -f -' +Oct 27 14:58:39.974: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:58:39.974: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:58:39.974: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-162 delete --grace-period=0 --force -f -' +Oct 27 14:58:40.076: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:58:40.076: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:58:40.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-162" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":177,"skipped":2854,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:58:40.112: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4189 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:08.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4189" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":346,"completed":178,"skipped":2918,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:08.440: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1093 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:59:09.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:59:11.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943549, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:59:14.218: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:59:25.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1093" for this suite. +STEP: Destroying namespace "webhook-1093-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":179,"skipped":2934,"failed":0} +SSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:59:25.345: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1675 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:25.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1675" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":180,"skipped":2937,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:25.591: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4381 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on node default medium +Oct 27 15:00:25.796: INFO: Waiting up to 5m0s for pod "pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0" in namespace "emptydir-4381" to be "Succeeded or Failed" +Oct 27 15:00:25.807: INFO: Pod "pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.168135ms +Oct 27 15:00:27.820: INFO: Pod "pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023865745s +Oct 27 15:00:29.832: INFO: Pod "pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035795203s +STEP: Saw pod success +Oct 27 15:00:29.832: INFO: Pod "pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0" satisfied condition "Succeeded or Failed" +Oct 27 15:00:29.843: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0 container test-container: +STEP: delete the pod +Oct 27 15:00:29.927: INFO: Waiting for pod pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0 to disappear +Oct 27 15:00:29.938: INFO: Pod pod-bceb00c4-daf0-4e35-a3e6-3410c813d0c0 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:29.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4381" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":181,"skipped":2988,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:29.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5216 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-5216 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-5216 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5216 +Oct 27 15:00:30.211: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:00:40.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Oct 27 15:00:40.235: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:00:40.765: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:00:40.765: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:00:40.765: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:00:40.777: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 15:00:50.789: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:00:50.789: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:00:50.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999951s +Oct 27 15:00:51.846: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988943441s +Oct 27 15:00:52.858: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977198684s +Oct 27 15:00:53.871: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.965103481s +Oct 27 
15:00:54.883: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.951771611s +Oct 27 15:00:55.897: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.940287317s +Oct 27 15:00:56.909: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.926427938s +Oct 27 15:00:57.921: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.914536003s +Oct 27 15:00:58.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.902651026s +Oct 27 15:00:59.949: INFO: Verifying statefulset ss doesn't scale past 1 for another 886.035608ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5216 +Oct 27 15:01:00.961: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:01:01.466: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:01:01.466: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:01:01.466: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:01:01.479: INFO: Found 1 stateful pods, waiting for 3 +Oct 27 15:01:11.493: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:01:11.493: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:01:11.493: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Oct 27 15:01:11.517: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:01:12.066: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:01:12.066: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:01:12.066: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:01:12.066: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:01:12.553: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:01:12.553: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:01:12.553: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:01:12.553: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:01:13.068: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:01:13.068: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:01:13.068: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:01:13.068: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:01:13.079: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Oct 27 15:01:23.106: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:01:23.106: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:01:23.106: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:01:23.142: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999495s +Oct 27 15:01:24.155: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98618145s +Oct 27 15:01:25.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972792408s +Oct 27 15:01:26.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.958811548s +Oct 27 15:01:27.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.945579213s +Oct 27 15:01:28.208: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.933248919s +Oct 27 15:01:29.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.920328812s +Oct 27 15:01:30.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.898736854s +Oct 27 15:01:31.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.877137905s +Oct 27 15:01:32.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 856.10058ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5216 +Oct 27 15:01:33.308: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:01:33.799: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:01:33.799: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:01:33.799: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:01:33.799: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:01:34.293: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:01:34.294: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:01:34.294: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:01:34.294: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:01:35.185: INFO: rc: 1 +Oct 27 15:01:35.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "7b337af7c20e884261b7367d5838430227072be86b11b83aa210dcd2a285f36f": task ae64d0a91b63dba9ffc923ed679bc1da3aad00468d9e64e84aacba01fa2d675c not found: not found + +error: +exit status 1 +Oct 27 15:01:45.187: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:01:45.319: INFO: rc: 1 +Oct 27 15:01:45.319: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:01:55.320: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:01:55.450: INFO: rc: 1 +Oct 27 15:01:55.450: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:02:05.451: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:02:05.570: INFO: rc: 1 +Oct 27 15:02:05.570: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:02:15.571: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:02:15.702: INFO: rc: 1 +Oct 27 15:02:15.702: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:02:25.703: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:02:25.813: INFO: rc: 1 +Oct 27 15:02:25.813: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:02:35.813: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:02:35.917: INFO: rc: 1 +Oct 27 15:02:35.917: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:02:45.919: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:02:46.033: INFO: rc: 1 +Oct 27 15:02:46.033: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:02:56.034: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 
15:02:56.415: INFO: rc: 1 +Oct 27 15:02:56.415: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:03:06.416: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:03:06.523: INFO: rc: 1 +Oct 27 15:03:06.523: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:03:16.525: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:03:16.623: INFO: rc: 1 +Oct 27 15:03:16.623: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:03:26.625: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:03:26.725: INFO: rc: 1 +Oct 27 15:03:26.725: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:03:36.727: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:03:36.827: INFO: rc: 1 +Oct 27 15:03:36.827: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:03:46.828: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:03:46.935: INFO: rc: 1 +Oct 27 15:03:46.935: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:03:56.937: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:03:57.036: INFO: rc: 1 +Oct 27 15:03:57.036: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:04:07.038: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:04:07.149: INFO: rc: 1 +Oct 27 15:04:07.149: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:04:17.149: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:04:17.257: INFO: rc: 1 +Oct 27 15:04:17.257: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not 
found + +error: +exit status 1 +Oct 27 15:04:27.259: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:04:27.360: INFO: rc: 1 +Oct 27 15:04:27.361: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:04:37.361: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:04:37.468: INFO: rc: 1 +Oct 27 15:04:37.469: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:04:47.471: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:04:47.580: INFO: rc: 1 +Oct 27 15:04:47.580: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:04:57.582: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:04:57.689: INFO: rc: 1 +Oct 27 15:04:57.689: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:05:07.691: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config 
--namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:05:07.806: INFO: rc: 1 +Oct 27 15:05:07.806: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:05:17.808: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:05:17.910: INFO: rc: 1 +Oct 27 15:05:17.910: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:05:27.911: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:05:28.021: INFO: rc: 1 +Oct 27 15:05:28.021: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:05:38.024: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:05:38.124: INFO: rc: 1 +Oct 27 15:05:38.124: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:05:48.126: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:05:48.242: INFO: rc: 1 +Oct 27 15:05:48.242: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:05:58.243: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:05:58.344: INFO: rc: 1 +Oct 27 15:05:58.344: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:06:08.346: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:06:08.459: INFO: rc: 1 +Oct 27 15:06:08.459: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:06:18.463: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:06:18.565: INFO: rc: 1 +Oct 27 15:06:18.565: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:06:28.568: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:06:28.673: INFO: rc: 1 +Oct 27 15:06:28.673: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 +Oct 27 15:06:38.677: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-5216 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:06:38.786: INFO: rc: 1 +Oct 27 15:06:38.786: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: +Oct 27 15:06:38.786: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:06:38.822: INFO: Deleting all statefulset in ns statefulset-5216 +Oct 27 15:06:38.834: INFO: Scaling statefulset ss to 0 +Oct 27 15:06:38.873: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:06:38.884: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:38.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5216" for this suite. + +• [SLOW TEST:368.982 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":182,"skipped":3014,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:38.953: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8787 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 
15:06:39.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39" in namespace "downward-api-8787" to be "Succeeded or Failed" +Oct 27 15:06:39.221: INFO: Pod "downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39": Phase="Pending", Reason="", readiness=false. Elapsed: 11.07596ms +Oct 27 15:06:41.236: INFO: Pod "downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025792492s +Oct 27 15:06:43.249: INFO: Pod "downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038971067s +STEP: Saw pod success +Oct 27 15:06:43.249: INFO: Pod "downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39" satisfied condition "Succeeded or Failed" +Oct 27 15:06:43.261: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39 container client-container: +STEP: delete the pod +Oct 27 15:06:43.345: INFO: Waiting for pod downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39 to disappear +Oct 27 15:06:43.356: INFO: Pod downwardapi-volume-c9d72508-c566-4413-8e62-8d219bacbc39 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:43.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8787" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":183,"skipped":3032,"failed":0} +SSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:43.390: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-5868 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:54.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5868" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":346,"completed":184,"skipped":3035,"failed":0} +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:54.705: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8982 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 15:06:54.912: INFO: Waiting up to 5m0s for pod "pod-3dd0255f-a76f-4136-89b4-771b55131d5b" in namespace "emptydir-8982" to be "Succeeded or Failed" +Oct 27 15:06:54.932: INFO: Pod "pod-3dd0255f-a76f-4136-89b4-771b55131d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.145098ms +Oct 27 15:06:56.944: INFO: Pod "pod-3dd0255f-a76f-4136-89b4-771b55131d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031401333s +Oct 27 15:06:58.956: INFO: Pod "pod-3dd0255f-a76f-4136-89b4-771b55131d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044029953s +STEP: Saw pod success +Oct 27 15:06:58.957: INFO: Pod "pod-3dd0255f-a76f-4136-89b4-771b55131d5b" satisfied condition "Succeeded or Failed" +Oct 27 15:06:58.969: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-3dd0255f-a76f-4136-89b4-771b55131d5b container test-container: +STEP: delete the pod +Oct 27 15:06:59.042: INFO: Waiting for pod pod-3dd0255f-a76f-4136-89b4-771b55131d5b to disappear +Oct 27 15:06:59.053: INFO: Pod pod-3dd0255f-a76f-4136-89b4-771b55131d5b no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:59.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8982" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":185,"skipped":3040,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:59.122: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3021 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +Oct 27 15:06:59.893: INFO: created pod pod-service-account-defaultsa +Oct 27 15:06:59.893: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Oct 27 15:06:59.907: INFO: created pod pod-service-account-mountsa +Oct 27 15:06:59.907: INFO: pod pod-service-account-mountsa service account token volume mount: true +Oct 27 15:06:59.922: INFO: created pod pod-service-account-nomountsa +Oct 27 15:06:59.922: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Oct 27 15:06:59.938: INFO: created pod pod-service-account-defaultsa-mountspec +Oct 27 15:06:59.938: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Oct 27 15:06:59.952: INFO: created pod pod-service-account-mountsa-mountspec +Oct 27 15:06:59.953: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Oct 27 15:06:59.967: INFO: created pod pod-service-account-nomountsa-mountspec +Oct 27 15:06:59.968: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Oct 27 15:06:59.983: INFO: created pod pod-service-account-defaultsa-nomountspec +Oct 27 15:06:59.983: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Oct 27 15:07:00.016: INFO: created pod pod-service-account-mountsa-nomountspec +Oct 27 15:07:00.016: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Oct 27 15:07:00.032: INFO: created pod pod-service-account-nomountsa-nomountspec +Oct 27 15:07:00.032: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:07:00.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3021" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":186,"skipped":3099,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:07:00.066: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1310 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:07:00.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:07:02.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944020, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:07:05.894: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:07:06.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1310" for this suite. +STEP: Destroying namespace "webhook-1310-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":187,"skipped":3149,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:07:06.332: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8331 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-8331 +Oct 27 15:07:06.553: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:07:08.565: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:07:10.564: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 15:07:10.575: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 15:07:11.129: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 15:07:11.129: INFO: stdout: "iptables" +Oct 27 15:07:11.129: INFO: proxyMode: iptables +Oct 27 15:07:11.147: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 27 15:07:11.161: INFO: Pod kube-proxy-mode-detector no longer 
exists +STEP: creating service affinity-clusterip-timeout in namespace services-8331 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-8331 +I1027 15:07:11.200693 5768 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-8331, replica count: 3 +I1027 15:07:14.252262 5768 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:07:14.277: INFO: Creating new exec pod +Oct 27 15:07:19.323: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Oct 27 15:07:19.870: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 15:07:19.870: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:07:19.870: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.143.186 80' +Oct 27 15:07:20.354: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.66.143.186 80\nConnection to 100.66.143.186 80 port [tcp/http] succeeded!\n" +Oct 27 15:07:20.354: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:07:20.354: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.66.143.186:80/ ; done' +Oct 27 15:07:20.922: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n" +Oct 27 15:07:20.922: INFO: stdout: 
"\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn\naffinity-clusterip-timeout-4jnxn" +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Received response from host: affinity-clusterip-timeout-4jnxn +Oct 27 15:07:20.922: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.66.143.186:80/' +Oct 27 15:07:21.390: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n" +Oct 27 15:07:21.390: INFO: stdout: "affinity-clusterip-timeout-4jnxn" +Oct 27 15:07:41.391: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.66.143.186:80/' +Oct 27 15:07:41.962: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n" +Oct 27 15:07:41.962: INFO: stdout: "affinity-clusterip-timeout-4jnxn" +Oct 27 15:08:01.963: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.66.143.186:80/' +Oct 27 15:08:02.501: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n" +Oct 27 15:08:02.501: INFO: stdout: "affinity-clusterip-timeout-4jnxn" +Oct 27 15:08:22.503: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.66.143.186:80/' +Oct 27 15:08:23.112: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n" +Oct 27 15:08:23.112: INFO: stdout: "affinity-clusterip-timeout-4jnxn" +Oct 27 15:08:43.115: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8331 exec execpod-affinitymz5wd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.66.143.186:80/' +Oct 27 15:08:43.637: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.66.143.186:80/\n" +Oct 27 15:08:43.637: INFO: stdout: "affinity-clusterip-timeout-x8wht" +Oct 27 15:08:43.637: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-8331, will wait for the garbage collector to delete the pods +Oct 27 15:08:43.733: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 13.70366ms +Oct 27 15:08:43.834: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.147147ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:08:47.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8331" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":188,"skipped":3190,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:08:47.403: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-4708 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-4708 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 15:08:47.584: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 15:08:47.673: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:08:49.685: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = 
true) +Oct 27 15:08:51.686: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:08:53.686: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:08:55.687: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:08:57.686: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:08:59.713: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:09:01.686: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:09:03.713: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:09:05.686: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:09:07.686: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:09:09.688: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 15:09:09.711: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 15:09:13.777: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 15:09:13.777: INFO: Breadth first check of 100.96.0.87 on host 10.250.0.5... +Oct 27 15:09:13.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.229:9080/dial?request=hostname&protocol=udp&host=100.96.0.87&port=8081&tries=1'] Namespace:pod-network-test-4708 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:13.788: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:14.204: INFO: Waiting for responses: map[] +Oct 27 15:09:14.204: INFO: reached 100.96.0.87 after 0/1 tries +Oct 27 15:09:14.204: INFO: Breadth first check of 100.96.1.228 on host 10.250.0.4... +Oct 27 15:09:14.216: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.229:9080/dial?request=hostname&protocol=udp&host=100.96.1.228&port=8081&tries=1'] Namespace:pod-network-test-4708 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:14.216: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:14.602: INFO: Waiting for responses: map[] +Oct 27 15:09:14.602: INFO: reached 100.96.1.228 after 0/1 tries +Oct 27 15:09:14.602: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:14.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-4708" for this suite. 
+•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":189,"skipped":3237,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:14.639: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-1367 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 15:09:14.842: INFO: Waiting up to 5m0s for pod "security-context-4c934567-b3ce-41b0-8e32-c722413bed58" in namespace "security-context-1367" to be "Succeeded or Failed" +Oct 27 15:09:14.853: INFO: Pod "security-context-4c934567-b3ce-41b0-8e32-c722413bed58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.873698ms +Oct 27 15:09:16.865: INFO: Pod "security-context-4c934567-b3ce-41b0-8e32-c722413bed58": Phase="Running", Reason="", readiness=true. Elapsed: 2.023211173s +Oct 27 15:09:18.878: INFO: Pod "security-context-4c934567-b3ce-41b0-8e32-c722413bed58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036158797s +STEP: Saw pod success +Oct 27 15:09:18.878: INFO: Pod "security-context-4c934567-b3ce-41b0-8e32-c722413bed58" satisfied condition "Succeeded or Failed" +Oct 27 15:09:18.890: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod security-context-4c934567-b3ce-41b0-8e32-c722413bed58 container test-container: +STEP: delete the pod +Oct 27 15:09:18.957: INFO: Waiting for pod security-context-4c934567-b3ce-41b0-8e32-c722413bed58 to disappear +Oct 27 15:09:18.968: INFO: Pod security-context-4c934567-b3ce-41b0-8e32-c722413bed58 no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:18.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-1367" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":190,"skipped":3253,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:19.003: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3620 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:19.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3620" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":191,"skipped":3281,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:19.250: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-6687 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Oct 27 15:09:20.196: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 15:09:20.196679 5768 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:20.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6687" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":192,"skipped":3295,"failed":0} + +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:20.221: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8364 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:09:20.440: INFO: The status of Pod labelsupdate9a099cf0-2e1a-440e-b628-aff9dd935cc6 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:09:22.453: INFO: The status of Pod labelsupdate9a099cf0-2e1a-440e-b628-aff9dd935cc6 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:09:24.452: INFO: The status of Pod labelsupdate9a099cf0-2e1a-440e-b628-aff9dd935cc6 is Running (Ready = true) +Oct 27 15:09:25.077: INFO: Successfully updated pod "labelsupdate9a099cf0-2e1a-440e-b628-aff9dd935cc6" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:29.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8364" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":193,"skipped":3295,"failed":0} +SS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:29.261: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9813 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:46.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9813" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":194,"skipped":3297,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:46.596: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5022 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:09:46.824: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Oct 27 15:09:50.873: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Oct 27 15:09:50.899: INFO: observed ReplicaSet test-rs in namespace replicaset-5022 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:09:50.899: INFO: observed ReplicaSet test-rs in namespace replicaset-5022 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:09:50.921: INFO: observed ReplicaSet test-rs in namespace replicaset-5022 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:09:50.929: INFO: observed ReplicaSet test-rs in namespace replicaset-5022 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 15:09:52.952: INFO: observed ReplicaSet test-rs in namespace replicaset-5022 with ReadyReplicas 2, AvailableReplicas 2 +Oct 27 15:09:53.297: INFO: observed Replicaset test-rs in namespace replicaset-5022 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:53.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5022" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":195,"skipped":3308,"failed":0} +SS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:53.331: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6964 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:09:53.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7" in namespace "projected-6964" to be "Succeeded or Failed" +Oct 27 15:09:53.546: INFO: Pod "downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.269844ms +Oct 27 15:09:55.560: INFO: Pod "downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02536933s +Oct 27 15:09:57.573: INFO: Pod "downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038432443s +STEP: Saw pod success +Oct 27 15:09:57.573: INFO: Pod "downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7" satisfied condition "Succeeded or Failed" +Oct 27 15:09:57.585: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7 container client-container: +STEP: delete the pod +Oct 27 15:09:57.695: INFO: Waiting for pod downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7 to disappear +Oct 27 15:09:57.707: INFO: Pod downwardapi-volume-1e84610a-f0d0-4eb2-b76f-d5e3a30960f7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:57.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6964" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":196,"skipped":3310,"failed":0} +SSSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:57.740: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingressclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingressclass-9479 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 15:09:58.369: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 15:09:58.404: INFO: waiting for watch events with expected annotations +Oct 27 15:09:58.404: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:58.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-9479" for this suite. 
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":197,"skipped":3317,"failed":0} + +------------------------------ +[sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:58.500: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-6443 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Oct 27 15:10:02.757: INFO: &Pod{ObjectMeta:{send-events-2ac8277f-2d6c-4a15-8cd7-4e54a0ce1460 events-6443 e5b0c2ff-7809-4cab-95a8-ab74402ad994 35720 0 2021-10-27 15:09:58 +0000 UTC map[name:foo time:689051033] map[cni.projectcalico.org/containerID:ed071307b6a856944d55afb43f3e833fb1dff879bbbcb47ddecbbfa706b38505 cni.projectcalico.org/podIP:100.96.1.236/32 cni.projectcalico.org/podIPs:100.96.1.236/32 kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-27 15:09:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:09:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:10:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.236\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hlbpg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hlbpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Sta
tus:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:09:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:09:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.236,StartTime:2021-10-27 15:09:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:10:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://d27e96dd76a910c3bfff06a16160d66f6da8936cc9deb5622947505cbc4642c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +STEP: checking for scheduler event about the pod +Oct 27 15:10:04.770: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Oct 27 15:10:06.783: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:06.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-6443" for this suite. 
+•{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":346,"completed":198,"skipped":3317,"failed":0} +SS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:06.830: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-718 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pods +Oct 27 15:10:07.040: INFO: created test-pod-1 +Oct 27 15:10:07.056: INFO: created test-pod-2 +Oct 27 15:10:07.077: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Oct 27 15:10:07.121: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:10:08.134: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:10:09.134: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:10:10.133: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:10:11.134: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:10:12.134: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:13.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-718" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":199,"skipped":3319,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:13.320: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-1138 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-q9z9 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:10:13.550: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q9z9" in namespace "subpath-1138" to be "Succeeded or Failed" +Oct 27 15:10:13.562: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.517258ms +Oct 27 15:10:15.575: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025076081s +Oct 27 15:10:17.589: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 4.0385054s +Oct 27 15:10:19.601: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 6.050594284s +Oct 27 15:10:21.614: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 8.063884573s +Oct 27 15:10:23.627: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 10.07632711s +Oct 27 15:10:25.639: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 12.088604184s +Oct 27 15:10:27.652: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 14.10169629s +Oct 27 15:10:29.665: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 16.114215809s +Oct 27 15:10:31.678: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 18.127233981s +Oct 27 15:10:33.690: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 20.139729725s +Oct 27 15:10:35.704: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Running", Reason="", readiness=true. Elapsed: 22.153657703s +Oct 27 15:10:37.718: INFO: Pod "pod-subpath-test-configmap-q9z9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.167986558s +STEP: Saw pod success +Oct 27 15:10:37.736: INFO: Pod "pod-subpath-test-configmap-q9z9" satisfied condition "Succeeded or Failed" +Oct 27 15:10:37.748: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-subpath-test-configmap-q9z9 container test-container-subpath-configmap-q9z9: +STEP: delete the pod +Oct 27 15:10:37.826: INFO: Waiting for pod pod-subpath-test-configmap-q9z9 to disappear +Oct 27 15:10:37.851: INFO: Pod pod-subpath-test-configmap-q9z9 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-q9z9 +Oct 27 15:10:37.851: INFO: Deleting pod "pod-subpath-test-configmap-q9z9" in namespace "subpath-1138" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:37.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-1138" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":200,"skipped":3360,"failed":0} +SSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:37.896: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9628 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-65428231-926c-4408-9e2a-94fc46f1a64a +STEP: Creating secret with name s-test-opt-upd-53590ee1-ae31-4305-8de1-ef27bfcc1479 +STEP: Creating the pod +Oct 27 15:10:38.158: INFO: The status of Pod pod-secrets-fcf97630-1855-415c-b84f-778154ec1f6d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:10:40.171: INFO: The status of Pod pod-secrets-fcf97630-1855-415c-b84f-778154ec1f6d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:10:42.172: INFO: The status of Pod pod-secrets-fcf97630-1855-415c-b84f-778154ec1f6d is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-65428231-926c-4408-9e2a-94fc46f1a64a +STEP: Updating secret s-test-opt-upd-53590ee1-ae31-4305-8de1-ef27bfcc1479 +STEP: Creating secret with name s-test-opt-create-bf50243c-77ff-4c70-8937-e64ebafb3dc7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:10.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9628" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":201,"skipped":3365,"failed":0} +S +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:10.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename limitrange +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-8109 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Oct 27 15:12:11.000: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Oct 27 15:12:11.024: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 15:12:11.024: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Oct 27 15:12:11.064: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 15:12:11.064: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Oct 27 15:12:11.091: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Oct 27 15:12:11.091: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 
300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Oct 27 15:12:18.202: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:18.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-8109" for this suite. +•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":346,"completed":202,"skipped":3366,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:18.261: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7426 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:12:18.446: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7426 create -f -' +Oct 27 15:12:18.702: INFO: stderr: "" +Oct 27 15:12:18.702: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Oct 27 15:12:18.702: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7426 create -f -' +Oct 27 15:12:18.974: INFO: stderr: "" +Oct 27 15:12:18.974: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 15:12:19.989: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:12:19.989: INFO: Found 0 / 1 +Oct 27 15:12:20.986: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:12:20.986: INFO: Found 0 / 1 +Oct 27 15:12:21.989: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:12:21.989: INFO: Found 1 / 1 +Oct 27 15:12:21.989: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 +Oct 27 15:12:22.001: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:12:22.001: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 15:12:22.001: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7426 describe pod agnhost-primary-mzcbl' +Oct 27 15:12:22.164: INFO: stderr: "" +Oct 27 15:12:22.164: INFO: stdout: "Name: agnhost-primary-mzcbl\nNamespace: kubectl-7426\nPriority: 0\nNode: shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2/10.250.0.4\nStart Time: Wed, 27 Oct 2021 15:12:18 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: bdd33b7e694eb2f4f0cdd0a91b622219ab6c73f2daa2725fc9ae5891cf7de42f\n cni.projectcalico.org/podIP: 100.96.1.242/32\n cni.projectcalico.org/podIPs: 100.96.1.242/32\n kubernetes.io/psp: e2e-test-privileged-psp\nStatus: Running\nIP: 100.96.1.242\nIPs:\n IP: 100.96.1.242\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://3c33df186efaa3ca766d4cbe288a4ce1091cefda1a6ec7fce2bbdaebb3ad1d79\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 27 Oct 2021 15:12:20 +0000\n Ready: True\n Restart Count: 0\n Environment:\n KUBERNETES_SERVICE_HOST: api.tmgxs-skc.it.internal.staging.k8s.ondemand.com\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbgj7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-gbgj7:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-7426/agnhost-primary-mzcbl to shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2\n Normal Pulled 3s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" +Oct 27 15:12:22.164: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7426 describe rc agnhost-primary' +Oct 27 15:12:22.338: INFO: stderr: "" +Oct 27 15:12:22.339: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7426\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s 
replication-controller Created pod: agnhost-primary-mzcbl\n" +Oct 27 15:12:22.339: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7426 describe service agnhost-primary' +Oct 27 15:12:22.479: INFO: stderr: "" +Oct 27 15:12:22.479: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7426\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.65.150.90\nIPs: 100.65.150.90\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.1.242:6379\nSession Affinity: None\nEvents: \n" +Oct 27 15:12:22.501: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7426 describe node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8' +Oct 27 15:12:22.700: INFO: stderr: "" +Oct 27 15:12:22.700: INFO: stdout: "Name: shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=Standard_DS2_v2\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=northeurope\n failure-domain.beta.kubernetes.io/zone=1\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=Standard_DS2_v2\n node.kubernetes.io/role=node\n topology.disk.csi.azure.com/zone=\n topology.kubernetes.io/region=northeurope\n topology.kubernetes.io/zone=1\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/cri-name=containerd\n worker.gardener.cloud/pool=worker-1\n worker.gardener.cloud/system-components=true\nAnnotations: checksum/cloud-config-data: 01d81794add9dbf8ff32bf54cc993ca948f9683087c87ccaf07be181685f828a\n csi.volume.kubernetes.io/nodeid:\n {\"disk.csi.azure.com\":\"shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8\",\"file.csi.azure.com\":\"shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8\"}\n node.alpha.kubernetes.io/ttl: 0\n node.machine.sapcloud.io/last-applied-anno-labels-taints:\n {\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"node.kubernetes.io/role\":\"node\",\"worker.garden.sapcloud.io/group\":\"worker-1\",\"worker.gard...\n projectcalico.org/IPv4Address: 10.250.0.5/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.96.0.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 27 Oct 2021 13:56:10 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8\n AcquireTime: \n RenewTime: Wed, 27 Oct 2021 15:12:13 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentKubeletRestart False Wed, 27 Oct 2021 15:07:56 +0000 Wed, 27 Oct 2021 14:07:48 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Wed, 27 Oct 2021 15:07:56 +0000 Wed, 27 Oct 2021 14:07:48 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Wed, 27 Oct 2021 15:07:56 +0000 Wed, 27 Oct 2021 14:07:48 +0000 NoFrequentContainerdRestart containerd is functioning properly\n KernelDeadlock False Wed, 27 Oct 2021 15:07:56 +0000 Wed, 27 Oct 2021 14:07:48 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem 
False Wed, 27 Oct 2021 15:07:56 +0000 Wed, 27 Oct 2021 14:07:48 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n CorruptDockerOverlay2 False Wed, 27 Oct 2021 15:07:56 +0000 Wed, 27 Oct 2021 14:07:49 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n FrequentUnregisterNetDevice False Wed, 27 Oct 2021 15:07:56 +0000 Wed, 27 Oct 2021 14:07:48 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n NetworkUnavailable False Wed, 27 Oct 2021 13:56:44 +0000 Wed, 27 Oct 2021 13:56:44 +0000 RouteCreated RouteController created a route\n MemoryPressure False Wed, 27 Oct 2021 15:12:18 +0000 Wed, 27 Oct 2021 13:56:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 27 Oct 2021 15:12:18 +0000 Wed, 27 Oct 2021 13:56:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 27 Oct 2021 15:12:18 +0000 Wed, 27 Oct 2021 13:56:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 27 Oct 2021 15:12:18 +0000 Wed, 27 Oct 2021 13:56:50 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.250.0.5\n Hostname: shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8\nCapacity:\n cpu: 2\n ephemeral-storage: 35011340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7126172Ki\n pods: 110\nAllocatable:\n cpu: 1920m\n ephemeral-storage: 34059031526\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 5975196Ki\n pods: 110\nSystem Info:\n Machine ID: 4ef871dfc4bf4111a1857efea39d46e8\n System UUID: e4f54f8e-8991-424c-bde6-5d9341776446\n Boot ID: 5ea65bc7-9a7a-4a5e-8598-e1e378a4940f\n Kernel Version: 5.4.0-7-cloud-amd64\n OS Image: Garden Linux 318.9\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.4\n Kubelet Version: v1.22.2\n Kube-Proxy Version: v1.22.2\nPodCIDR: 100.96.0.0/24\nPodCIDRs: 100.96.0.0/24\nProviderID: azure:///subscriptions/0b9904be-2a50-4fda-a947-c5f1b1d07666/resourceGroups/shoot--it--tmgxs-skc/providers/Microsoft.Compute/virtualMachines/shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8\nNon-terminated Pods: (19 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system addons-nginx-ingress-controller-76f55b7b5f-ffxv8 100m (5%) 400m (20%) 128Mi (2%) 512Mi (8%) 62m\n kube-system addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-w2blg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 79m\n kube-system apiserver-proxy-vdnm2 40m (2%) 400m (20%) 40Mi (0%) 500Mi (8%) 76m\n kube-system calico-node-bmkxt 250m (13%) 800m (41%) 100Mi (1%) 700Mi (11%) 68m\n kube-system calico-node-vertical-autoscaler-785b5f968-sbxt6 10m (0%) 10m (0%) 50Mi (0%) 50Mi (0%) 79m\n kube-system calico-typha-deploy-546b97d4b5-kw64w 200m (10%) 500m (26%) 100Mi (1%) 700Mi (11%) 79m\n kube-system calico-typha-horizontal-autoscaler-5b58bb446c-p96rk 10m (0%) 10m (0%) 50Mi (0%) 50Mi (0%) 79m\n kube-system calico-typha-vertical-autoscaler-5c9655cddd-z7tgn 10m (0%) 10m (0%) 50Mi (0%) 50Mi (0%) 79m\n kube-system coredns-7649bdf444-cnjp5 50m (2%) 250m (13%) 15Mi (0%) 500Mi (8%) 78m\n kube-system coredns-7649bdf444-x6nkv 50m (2%) 250m (13%) 15Mi (0%) 500Mi (8%) 79m\n kube-system csi-driver-node-disk-tb5lc 40m (2%) 110m (5%) 114Mi (1%) 180Mi (3%) 76m\n kube-system csi-driver-node-file-8vk78 40m (2%) 110m (5%) 114Mi (1%) 180Mi (3%) 76m\n kube-system kube-proxy-7d5xq 34m (1%) 92m (4%) 61066436 (0%) 198265744 (3%) 15m\n kube-system 
metrics-server-5555d7587-mw896 50m (2%) 500m (26%) 150Mi (2%) 1Gi (17%) 79m\n kube-system node-exporter-fg8qw 50m (2%) 150m (7%) 50Mi (0%) 150Mi (2%) 76m\n kube-system node-problem-detector-bxt7r 49m (2%) 196m (10%) 49566436 (0%) 198265744 (3%) 64m\n kube-system vpn-shoot-7f6446d489-9kghs 100m (5%) 400m (20%) 100Mi (1%) 400Mi (6%) 79m\n kubernetes-dashboard dashboard-metrics-scraper-7ccbfc448f-jcrjk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 79m\n kubernetes-dashboard kubernetes-dashboard-65d5f5c55-sf9qc 50m (2%) 200m (10%) 50Mi (0%) 200Mi (3%) 79m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1133m (59%) 4388m (228%)\n memory 1291329448 (21%) 6369220384 (104%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" +Oct 27 15:12:22.700: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7426 describe namespace kubectl-7426' +Oct 27 15:12:22.858: INFO: stderr: "" +Oct 27 15:12:22.859: INFO: stdout: "Name: kubectl-7426\nLabels: e2e-framework=kubectl\n e2e-run=4b4774fa-5bfd-4874-8a0a-18f78a254440\n kubernetes.io/metadata.name=kubectl-7426\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:12:22.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7426" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":203,"skipped":3376,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:12:22.893: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-6157 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:01.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-6157" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":204,"skipped":3415,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:01.170: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3644 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:14:01.384: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 15:14:05.410: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Oct 27 15:14:07.423: INFO: Creating deployment "test-rollover-deployment" +Oct 27 15:14:07.450: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Oct 27 15:14:09.473: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Oct 27 15:14:09.495: INFO: Ensure that both replica sets have 1 created replica +Oct 27 15:14:09.517: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Oct 27 15:14:09.545: INFO: Updating deployment test-rollover-deployment +Oct 27 15:14:09.545: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Oct 27 15:14:11.568: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Oct 27 15:14:11.591: INFO: Make sure deployment "test-rollover-deployment" is complete +Oct 27 15:14:11.613: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:14:11.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944449, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:14:13.635: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:14:13.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944451, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:14:15.638: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:14:15.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944451, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:14:17.638: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:14:17.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944451, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:14:19.641: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:14:19.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944451, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:14:21.641: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:14:21.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944451, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944447, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:14:23.637: INFO: +Oct 27 15:14:23.637: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:14:23.669: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-3644 f27547fb-972d-4955-afb8-0f7f1d05d333 37430 2 2021-10-27 15:14:07 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 15:14:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] 
map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006fd2978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 15:14:07 +0000 UTC,LastTransitionTime:2021-10-27 15:14:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-27 15:14:21 +0000 UTC,LastTransitionTime:2021-10-27 15:14:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:14:23.681: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-3644 66d8870c-57b4-458a-9a3d-48a972f72320 37423 2 2021-10-27 15:14:09 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment f27547fb-972d-4955-afb8-0f7f1d05d333 0xc006fd2f50 0xc006fd2f51}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:14:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f27547fb-972d-4955-afb8-0f7f1d05d333\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:14:21 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] 
map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006fd2fe8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:14:23.681: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Oct 27 15:14:23.681: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3644 42a0180c-54f2-4e49-830f-608c074632af 37429 2 2021-10-27 15:14:01 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment f27547fb-972d-4955-afb8-0f7f1d05d333 0xc006fd2d07 0xc006fd2d08}] [] [{e2e.test Update apps/v1 2021-10-27 15:14:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f27547fb-972d-4955-afb8-0f7f1d05d333\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:14:21 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006fd2dc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:14:23.681: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-3644 df11c049-6d2f-4cf6-8b3b-28e3d03f3106 37349 2 2021-10-27 15:14:07 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment f27547fb-972d-4955-afb8-0f7f1d05d333 0xc006fd2e37 0xc006fd2e38}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:14:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f27547fb-972d-4955-afb8-0f7f1d05d333\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:14:09 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006fd2ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:14:23.696: INFO: Pod "test-rollover-deployment-98c5f4599-r5qb6" is available: +&Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-r5qb6 test-rollover-deployment-98c5f4599- deployment-3644 61e336cf-61a1-4982-8ef0-7e6dd296c54b 37370 0 2021-10-27 15:14:09 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[cni.projectcalico.org/containerID:3855106a464946d08e0924d398f752fd354dffc5715e064814e95efca6dab4f0 cni.projectcalico.org/podIP:100.96.1.247/32 cni.projectcalico.org/podIPs:100.96.1.247/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 66d8870c-57b4-458a-9a3d-48a972f72320 0xc006fd3550 0xc006fd3551}] [] [{kube-controller-manager Update v1 2021-10-27 15:14:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"66d8870c-57b4-458a-9a3d-48a972f72320\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:14:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:14:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.247\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h4bhd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4bhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromS
ource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:14:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:14:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:14:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:14:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.247,StartTime:2021-10-27 15:14:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:14:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://7a0e7dfc76a4ba0475efe0ceea089c8a6af92f7a1e2717b7f70c9622396f8136,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:23.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-3644" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":205,"skipped":3422,"failed":0} +SSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:23.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-3670 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Oct 27 15:14:24.036: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:14:26.049: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:14:28.049: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Oct 27 15:14:28.091: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:14:30.105: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Oct 27 15:14:30.116: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:30.116: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:30.491: INFO: Exec stderr: "" +Oct 27 15:14:30.491: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:30.491: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:30.908: INFO: Exec stderr: "" +Oct 27 15:14:30.908: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:30.908: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:31.302: INFO: Exec stderr: "" +Oct 27 15:14:31.302: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:31.302: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:31.679: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Oct 27 15:14:31.679: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:31.679: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:32.063: INFO: Exec stderr: "" +Oct 27 15:14:32.063: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:32.063: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:32.480: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Oct 27 15:14:32.480: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:32.480: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:32.863: INFO: Exec stderr: "" +Oct 27 15:14:32.864: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:32.864: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:33.315: INFO: Exec stderr: "" +Oct 27 15:14:33.315: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:33.317: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:33.752: INFO: Exec stderr: "" +Oct 27 15:14:33.752: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3670 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:33.752: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:14:34.168: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:34.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-3670" for this suite. 
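+The exec checks above reduce to one observable: kubelet rewrites /etc/hosts for every container unless the pod runs with hostNetwork=true or the container mounts its own file over /etc/hosts. A spot-check along the same lines, assuming the pod and container names from this run still exist and that kubelet's managed file begins with its usual marker comment:
+
+kubectl -n e2e-kubelet-etc-hosts-3670 exec test-pod -c busybox-1 -- cat /etc/hosts \
+  | grep -q 'Kubernetes-managed hosts file' && echo kubelet-managed || echo not-managed
+# the hostNetwork=true pod should report not-managed
+kubectl -n e2e-kubelet-etc-hosts-3670 exec test-host-network-pod -c busybox-1 -- cat /etc/hosts \
+  | grep -q 'Kubernetes-managed hosts file' && echo kubelet-managed || echo not-managed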
+•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":206,"skipped":3427,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:34.204: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6797 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 15:14:34.387: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6797 create -f -' +Oct 27 15:14:34.873: INFO: stderr: "" +Oct 27 15:14:34.873: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 15:14:35.886: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:14:35.886: INFO: Found 0 / 1 +Oct 27 15:14:36.886: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:14:36.886: INFO: Found 0 / 1 +Oct 27 15:14:37.886: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:14:37.886: INFO: Found 1 / 1 +Oct 27 15:14:37.886: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Oct 27 15:14:37.898: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:14:37.898: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 15:14:37.898: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6797 patch pod agnhost-primary-cjbcw -p {"metadata":{"annotations":{"x":"y"}}}' +Oct 27 15:14:38.002: INFO: stderr: "" +Oct 27 15:14:38.002: INFO: stdout: "pod/agnhost-primary-cjbcw patched\n" +STEP: checking annotations +Oct 27 15:14:38.014: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:14:38.014: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:38.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6797" for this suite. 
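+The patch step above is a plain strategic-merge patch that adds a single annotation to a running pod. The equivalent by hand, reusing the run-specific pod name from the log (it will differ on a fresh run):
+
+kubectl -n kubectl-6797 patch pod agnhost-primary-cjbcw -p '{"metadata":{"annotations":{"x":"y"}}}'
+# confirm the annotation landed
+kubectl -n kubectl-6797 get pod agnhost-primary-cjbcw -o jsonpath='{.metadata.annotations.x}'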
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":207,"skipped":3437,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:38.052: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-590 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 27 15:14:38.237: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:59.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-590" for this suite. 
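+The CRD test above serves a custom resource under several versions, renames one of them, and verifies the published OpenAPI spec follows. A minimal sketch of such a CRD, using the hypothetical group example.com and kind Foo (the conformance test generates its own random group and kind):
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: foos.example.com
+spec:
+  group: example.com
+  scope: Namespaced
+  names: {plural: foos, singular: foo, kind: Foo}
+  versions:
+  - name: v1                  # renaming this entry (e.g. to v2) republishes the spec
+    served: true
+    storage: true
+    schema: {openAPIV3Schema: {type: object}}
+EOF
+# after the CRD is established, the renamed version shows up in the aggregated spec
+kubectl get --raw /openapi/v2 | grep -c 'example.com'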
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":208,"skipped":3450,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:59.259: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7882 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:14:59.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba" in namespace "downward-api-7882" to be "Succeeded or Failed" +Oct 27 15:14:59.479: INFO: Pod "downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba": Phase="Pending", Reason="", readiness=false. Elapsed: 11.171206ms +Oct 27 15:15:01.492: INFO: Pod "downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024303915s +Oct 27 15:15:03.505: INFO: Pod "downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037020978s +STEP: Saw pod success +Oct 27 15:15:03.505: INFO: Pod "downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba" satisfied condition "Succeeded or Failed" +Oct 27 15:15:03.517: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba container client-container: +STEP: delete the pod +Oct 27 15:15:03.623: INFO: Waiting for pod downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba to disappear +Oct 27 15:15:03.633: INFO: Pod downwardapi-volume-ea7ae8f4-1b56-476c-82fe-b78dfad013ba no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:03.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7882" for this suite. 
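+The downward API volume test above rests on a documented default: when a container declares no CPU limit, resourceFieldRef limits.cpu resolves to the node's allocatable CPU. A minimal pod exposing that value, with the illustrative name dapi-cpu-demo:
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-cpu-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
+    volumeMounts:
+    - {name: podinfo, mountPath: /etc/podinfo}
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: cpu_limit
+        resourceFieldRef: {containerName: client-container, resource: limits.cpu}
+EOF
+kubectl logs dapi-cpu-demo   # prints node-allocatable CPU in whole cores, since no limit is set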
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":209,"skipped":3488,"failed":0} +S +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:03.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-261 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-652f09a8-a862-4207-a1bd-c3a4038181da +STEP: Creating a pod to test consume secrets +Oct 27 15:15:03.885: INFO: Waiting up to 5m0s for pod "pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f" in namespace "secrets-261" to be "Succeeded or Failed" +Oct 27 15:15:03.896: INFO: Pod "pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.530863ms +Oct 27 15:15:05.909: INFO: Pod "pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024414209s +Oct 27 15:15:07.923: INFO: Pod "pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038526764s +STEP: Saw pod success +Oct 27 15:15:07.923: INFO: Pod "pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f" satisfied condition "Succeeded or Failed" +Oct 27 15:15:07.935: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f container secret-volume-test: +STEP: delete the pod +Oct 27 15:15:08.047: INFO: Waiting for pod pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f to disappear +Oct 27 15:15:08.060: INFO: Pod pod-secrets-21d58aa0-26ea-4ff3-a490-7fa0b95a910f no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:08.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-261" for this suite. 
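+The secret-volume test above is about file permissions: defaultMode sets the mode bits of every file projected from the secret. A minimal reproduction with a hypothetical secret demo-secret (the conformance run uses generated names):
+
+kubectl create secret generic demo-secret --from-literal=data-1=value-1
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/secret-volume"]
+    volumeMounts:
+    - {name: secret-volume, mountPath: /etc/secret-volume, readOnly: true}
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: demo-secret
+      defaultMode: 0400   # each projected file shows up as -r-------- inside the container
+EOF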
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":210,"skipped":3489,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:08.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6832 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-cbcb1c5d-65d4-49e2-a214-0dfa81efa6d8 +STEP: Creating a pod to test consume secrets +Oct 27 15:15:08.313: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5" in namespace "projected-6832" to be "Succeeded or Failed" +Oct 27 15:15:08.325: INFO: Pod "pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.199185ms +Oct 27 15:15:10.336: INFO: Pod "pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5": Phase="Running", Reason="", readiness=true. Elapsed: 2.022642861s +Oct 27 15:15:12.348: INFO: Pod "pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034903268s +STEP: Saw pod success +Oct 27 15:15:12.348: INFO: Pod "pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5" satisfied condition "Succeeded or Failed" +Oct 27 15:15:12.360: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 15:15:12.424: INFO: Waiting for pod pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5 to disappear +Oct 27 15:15:12.435: INFO: Pod pod-projected-secrets-f18b12f0-2112-40f7-b9b1-70957b43a2b5 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:12.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6832" for this suite. 
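+The projected-secret variant differs from a plain secret volume only in the source: a projected volume can merge several sources (secrets, configmaps, downward API, service account tokens) under one mount point. A minimal sketch, reusing the hypothetical demo-secret from the previous example:
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "cat /etc/projected/data-1"]
+    volumeMounts:
+    - {name: projected-secret, mountPath: /etc/projected, readOnly: true}
+  volumes:
+  - name: projected-secret
+    projected:
+      sources:
+      - secret:
+          name: demo-secret
+EOF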
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":211,"skipped":3546,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:12.470: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3418 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:15:12.655: INFO: Creating simple deployment test-new-deployment +Oct 27 15:15:12.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:15:14.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944512, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:15:16.796: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-3418 8d0e2047-c2c1-48f7-a4ec-8190ccfd281e 37939 3 2021-10-27 15:15:12 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2021-10-27 15:15:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052fbb88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:2,UpdatedReplicas:2,AvailableReplicas:1,UnavailableReplicas:3,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-27 15:15:15 +0000 UTC,LastTransitionTime:2021-10-27 15:15:12 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 15:15:16 +0000 UTC,LastTransitionTime:2021-10-27 15:15:16 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:15:16.811: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-3418 b208fa13-1534-4650-8965-f3d14dd93976 37937 3 2021-10-27 15:15:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 8d0e2047-c2c1-48f7-a4ec-8190ccfd281e 0xc005536077 0xc005536078}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:15:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d0e2047-c2c1-48f7-a4ec-8190ccfd281e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:15:15 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005536128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:15:16.823: INFO: Pod "test-new-deployment-847dcfb7fb-cvcw8" is available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-cvcw8 test-new-deployment-847dcfb7fb- deployment-3418 b7d90c60-61e3-4ab2-8137-c34ed2ae8d38 37911 0 2021-10-27 15:15:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:6d42282b6976f1c14aef1a49b50fd335a242d75af97e7c485962c694cc9e915a cni.projectcalico.org/podIP:100.96.1.253/32 cni.projectcalico.org/podIPs:100.96.1.253/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb b208fa13-1534-4650-8965-f3d14dd93976 0xc0054f8337 0xc0054f8338}] [] [{kube-controller-manager Update v1 2021-10-27 15:15:12 
+0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b208fa13-1534-4650-8965-f3d14dd93976\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:15:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:15:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.253\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cwdtk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwdtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,E
nvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.253,StartTime:2021-10-27 15:15:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:15:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://9b1fd80f2d2066d7acafc27341257c7a1d32c9c330618b337eee56af10c21bb9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:15:16.823: INFO: Pod "test-new-deployment-847dcfb7fb-xjmbk" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-xjmbk test-new-deployment-847dcfb7fb- deployment-3418 ab0385e8-9f1a-4ceb-aad5-44d6dc904dcf 37941 0 2021-10-27 15:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb b208fa13-1534-4650-8965-f3d14dd93976 0xc0054f85b0 0xc0054f85b1}] [] [{kube-controller-manager Update v1 2021-10-27 15:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b208fa13-1534-4650-8965-f3d14dd93976\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5kzbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5kzbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Tole
ration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:15:16.823: INFO: Pod "test-new-deployment-847dcfb7fb-xlh9j" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-xlh9j test-new-deployment-847dcfb7fb- deployment-3418 282bce1e-9b9d-4a00-98f2-f9e1ef95ca2a 37945 0 2021-10-27 15:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb b208fa13-1534-4650-8965-f3d14dd93976 0xc0054f8740 0xc0054f8741}] [] [{kube-controller-manager Update v1 2021-10-27 15:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b208fa13-1534-4650-8965-f3d14dd93976\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:15:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xnp24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnp24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContaine
rs:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.5,PodIP:,StartTime:2021-10-27 15:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:15:16.824: INFO: Pod "test-new-deployment-847dcfb7fb-zfjjz" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-zfjjz test-new-deployment-847dcfb7fb- deployment-3418 e420e3d6-7af0-4787-8796-4e31c6a6fd14 37943 0 2021-10-27 15:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb b208fa13-1534-4650-8965-f3d14dd93976 0xc0054f8950 0xc0054f8951}] [] [{kube-controller-manager Update v1 2021-10-27 15:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b208fa13-1534-4650-8965-f3d14dd93976\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cl6wp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl6wp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]E
phemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:16.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-3418" for this suite. +•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":212,"skipped":3569,"failed":0} +SSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:16.857: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6992 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:15:17.058: INFO: Waiting up to 5m0s for pod "downward-api-6d079231-e0cd-431d-8101-177cce1e83ff" in namespace "downward-api-6992" to be "Succeeded or Failed" +Oct 27 15:15:17.069: INFO: Pod "downward-api-6d079231-e0cd-431d-8101-177cce1e83ff": Phase="Pending", Reason="", readiness=false. Elapsed: 11.33707ms +Oct 27 15:15:19.081: INFO: Pod "downward-api-6d079231-e0cd-431d-8101-177cce1e83ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023474515s +Oct 27 15:15:21.094: INFO: Pod "downward-api-6d079231-e0cd-431d-8101-177cce1e83ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036011218s +STEP: Saw pod success +Oct 27 15:15:21.094: INFO: Pod "downward-api-6d079231-e0cd-431d-8101-177cce1e83ff" satisfied condition "Succeeded or Failed" +Oct 27 15:15:21.105: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downward-api-6d079231-e0cd-431d-8101-177cce1e83ff container dapi-container: +STEP: delete the pod +Oct 27 15:15:21.178: INFO: Waiting for pod downward-api-6d079231-e0cd-431d-8101-177cce1e83ff to disappear +Oct 27 15:15:21.190: INFO: Pod downward-api-6d079231-e0cd-431d-8101-177cce1e83ff no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:21.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6992" for this suite. 
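+The downward API env test above maps pod metadata into environment variables through fieldRef; metadata.uid is one of the supported field paths. A minimal pod printing its own UID, under the illustrative name dapi-uid-demo:
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-uid-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox
+    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
+    env:
+    - name: POD_UID
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.uid
+EOF
+kubectl logs dapi-uid-demo   # the printed value matches .metadata.uid of the pod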
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":213,"skipped":3575,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:21.225: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9095 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:15:27.596: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:27.641: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:27.720: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:27.795: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:27.888: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:27.919: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:27.950: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:27.980: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:28.039: INFO: Lookups using dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local] + +Oct 27 15:15:33.073: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the 
requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.137: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.180: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.210: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.301: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.331: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.362: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.394: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:33.455: INFO: Lookups using dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local] + +Oct 27 15:15:38.072: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.132: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.162: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.193: INFO: Unable to read 
wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.283: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.336: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.396: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:38.455: INFO: Lookups using dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local] + +Oct 27 15:15:43.071: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.105: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.134: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.197: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.287: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.318: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server 
could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.349: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.379: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:43.439: INFO: Lookups using dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local] + +Oct 27 15:15:48.071: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.101: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.130: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.194: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.282: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.322: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.463: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:48.556: INFO: Lookups using 
dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local] + +Oct 27 15:15:53.071: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.101: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.168: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.197: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.295: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.324: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.354: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.385: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6: the server could not find the requested resource (get pods dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6) +Oct 27 15:15:53.460: INFO: Lookups using dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local] + +Oct 27 15:15:58.430: INFO: DNS probes using dns-9095/dns-test-0e86ad0b-3731-4441-9b25-e6fdfa1864b6 succeeded + +STEP: 
deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:58.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9095" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":214,"skipped":3686,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:58.502: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2524 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:15:59.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:16:01.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944559, loc:(*time.Location)(0xa09bc80)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:16:04.381: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:04.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2524" for this suite. +STEP: Destroying namespace "webhook-2524-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":215,"skipped":3709,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:04.893: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-513 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:16:05.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:16:07.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944565, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:16:10.431: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:16:10.443: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2731-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:14.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-513" for this suite. +STEP: Destroying namespace "webhook-513-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":216,"skipped":3747,"failed":0} +SSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:14.213: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1704 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 15:16:17.951: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:17.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-1704" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":217,"skipped":3751,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:18.018: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9428 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 15:16:21.276: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:21.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9428" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":218,"skipped":3773,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:21.338: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5077 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:16:21.522: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5077 version' +Oct 27 15:16:21.621: INFO: stderr: "" +Oct 27 15:16:21.621: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:38:50Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:32:41Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:21.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5077" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":219,"skipped":3776,"failed":0} +SSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:21.657: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2659 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-6b7b0487-0ec2-43c7-9245-d1ff67ee9f63 +STEP: Creating a pod to test consume configMaps +Oct 27 15:16:21.873: INFO: Waiting up to 5m0s for pod "pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127" in namespace "configmap-2659" to be "Succeeded or Failed" +Oct 27 15:16:21.885: INFO: Pod "pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127": Phase="Pending", Reason="", readiness=false. Elapsed: 11.15068ms +Oct 27 15:16:23.898: INFO: Pod "pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024271678s +Oct 27 15:16:25.911: INFO: Pod "pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037578296s +STEP: Saw pod success +Oct 27 15:16:25.911: INFO: Pod "pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127" satisfied condition "Succeeded or Failed" +Oct 27 15:16:25.925: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127 container agnhost-container: +STEP: delete the pod +Oct 27 15:16:26.036: INFO: Waiting for pod pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127 to disappear +Oct 27 15:16:26.047: INFO: Pod pod-configmaps-3095084e-518a-4bc0-8739-99cb444b0127 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:26.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2659" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":220,"skipped":3782,"failed":0} +SS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:26.082: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3207 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-secret-gctb +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:16:26.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gctb" in namespace "subpath-3207" to be "Succeeded or Failed" +Oct 27 15:16:26.325: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.537299ms +Oct 27 15:16:28.338: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025041219s +Oct 27 15:16:30.355: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 4.042816847s +Oct 27 15:16:32.368: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 6.055193864s +Oct 27 15:16:34.380: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 8.067860683s +Oct 27 15:16:36.393: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 10.0808829s +Oct 27 15:16:38.407: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 12.094557743s +Oct 27 15:16:40.419: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 14.106990957s +Oct 27 15:16:42.432: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 16.119522012s +Oct 27 15:16:44.446: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 18.133640122s +Oct 27 15:16:46.463: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 20.150547203s +Oct 27 15:16:48.475: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Running", Reason="", readiness=true. Elapsed: 22.162937297s +Oct 27 15:16:50.487: INFO: Pod "pod-subpath-test-secret-gctb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.174692134s +STEP: Saw pod success +Oct 27 15:16:50.487: INFO: Pod "pod-subpath-test-secret-gctb" satisfied condition "Succeeded or Failed" +Oct 27 15:16:50.499: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-subpath-test-secret-gctb container test-container-subpath-secret-gctb: +STEP: delete the pod +Oct 27 15:16:50.586: INFO: Waiting for pod pod-subpath-test-secret-gctb to disappear +Oct 27 15:16:50.598: INFO: Pod pod-subpath-test-secret-gctb no longer exists +STEP: Deleting pod pod-subpath-test-secret-gctb +Oct 27 15:16:50.598: INFO: Deleting pod "pod-subpath-test-secret-gctb" in namespace "subpath-3207" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:16:50.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-3207" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":221,"skipped":3784,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:16:50.644: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7386 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-7386 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 15:16:50.831: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 15:16:50.903: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:16:52.916: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:16:54.916: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:16:56.917: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:16:58.917: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:17:00.917: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:17:02.916: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:17:04.916: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:17:06.917: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:17:08.916: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:17:10.916: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:17:12.918: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 15:17:12.942: INFO: The status of Pod 
netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 15:17:17.045: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 15:17:17.045: INFO: Going to poll 100.96.0.90 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 27 15:17:17.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.0.90:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7386 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:17:17.056: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:17:17.444: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 15:17:17.444: INFO: Going to poll 100.96.1.11 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 27 15:17:17.456: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.11:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7386 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:17:17.456: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:17:17.865: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:17:17.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7386" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":222,"skipped":3823,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:17:17.899: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4055 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 15:17:18.104: INFO: Waiting up to 5m0s for pod "pod-a6eef8d7-997f-4628-90a6-195bc72cac40" in namespace "emptydir-4055" to be "Succeeded or Failed" +Oct 27 15:17:18.115: INFO: Pod "pod-a6eef8d7-997f-4628-90a6-195bc72cac40": Phase="Pending", Reason="", readiness=false. Elapsed: 11.106616ms +Oct 27 15:17:20.128: INFO: Pod "pod-a6eef8d7-997f-4628-90a6-195bc72cac40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023771022s +Oct 27 15:17:22.141: INFO: Pod "pod-a6eef8d7-997f-4628-90a6-195bc72cac40": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03708418s +STEP: Saw pod success +Oct 27 15:17:22.141: INFO: Pod "pod-a6eef8d7-997f-4628-90a6-195bc72cac40" satisfied condition "Succeeded or Failed" +Oct 27 15:17:22.153: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-a6eef8d7-997f-4628-90a6-195bc72cac40 container test-container: +STEP: delete the pod +Oct 27 15:17:22.262: INFO: Waiting for pod pod-a6eef8d7-997f-4628-90a6-195bc72cac40 to disappear +Oct 27 15:17:22.273: INFO: Pod pod-a6eef8d7-997f-4628-90a6-195bc72cac40 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:17:22.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4055" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":223,"skipped":3838,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:17:22.309: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6334 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:17:22.527: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:17:24.541: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:17:26.540: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:17:28.541: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:30.540: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:32.541: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:34.541: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:36.541: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:38.543: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:40.541: INFO: The status of Pod 
test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:42.540: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = false) +Oct 27 15:17:44.540: INFO: The status of Pod test-webserver-cba50e68-eb78-4cf0-98ce-d4eb63c8e972 is Running (Ready = true) +Oct 27 15:17:44.552: INFO: Container started at 2021-10-27 15:17:25 +0000 UTC, pod became ready at 2021-10-27 15:17:42 +0000 UTC +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:17:44.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6334" for this suite. +•{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":224,"skipped":3845,"failed":0} + +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:17:44.586: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4815 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: Gathering metrics +Oct 27 15:17:44.869: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:17:44.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1027 15:17:44.869368 5768 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-4815" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":225,"skipped":3845,"failed":0} +SSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:17:44.895: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9656 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-9656 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-9656 +STEP: creating replication controller externalsvc in namespace services-9656 +I1027 15:17:45.133092 5768 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9656, replica count: 2 +I1027 15:17:48.184410 5768 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Oct 27 15:17:48.228: INFO: Creating new exec pod +Oct 27 15:17:52.268: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9656 exec execpodxlshb -- /bin/sh -x -c nslookup nodeport-service.services-9656.svc.cluster.local' +Oct 27 15:17:52.817: INFO: stderr: "+ nslookup nodeport-service.services-9656.svc.cluster.local\n" +Oct 27 15:17:52.817: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nnodeport-service.services-9656.svc.cluster.local\tcanonical name = externalsvc.services-9656.svc.cluster.local.\nName:\texternalsvc.services-9656.svc.cluster.local\nAddress: 100.68.114.105\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-9656, will wait for the garbage collector to delete the pods +Oct 27 15:17:52.893: INFO: Deleting ReplicationController externalsvc took: 14.539788ms +Oct 27 15:17:52.994: INFO: Terminating ReplicationController externalsvc pods took: 100.936146ms +Oct 27 15:17:56.018: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:17:56.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9656" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":226,"skipped":3852,"failed":0} +SSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:17:56.068: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6745 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Oct 27 15:18:00.314: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6745 PodName:var-expansion-f3e96f2a-8487-4016-94d4-26e8f109f796 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:18:00.314: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: test for file in mounted path +Oct 27 15:18:00.774: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6745 PodName:var-expansion-f3e96f2a-8487-4016-94d4-26e8f109f796 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:18:00.774: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: updating the annotation value +Oct 27 15:18:01.743: INFO: Successfully updated pod "var-expansion-f3e96f2a-8487-4016-94d4-26e8f109f796" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Oct 27 15:18:01.759: INFO: Deleting pod "var-expansion-f3e96f2a-8487-4016-94d4-26e8f109f796" in namespace "var-expansion-6745" +Oct 27 15:18:01.772: INFO: Wait up to 5m0s for pod "var-expansion-f3e96f2a-8487-4016-94d4-26e8f109f796" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:18:35.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6745" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":227,"skipped":3858,"failed":0} +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:18:35.833: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3683 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on tmpfs +Oct 27 15:18:36.038: INFO: Waiting up to 5m0s for pod "pod-d17a4bd6-40d6-4368-b61c-efad905dde4d" in namespace "emptydir-3683" to be "Succeeded or Failed" +Oct 27 15:18:36.050: INFO: Pod "pod-d17a4bd6-40d6-4368-b61c-efad905dde4d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.517686ms +Oct 27 15:18:38.062: INFO: Pod "pod-d17a4bd6-40d6-4368-b61c-efad905dde4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024277618s +STEP: Saw pod success +Oct 27 15:18:38.062: INFO: Pod "pod-d17a4bd6-40d6-4368-b61c-efad905dde4d" satisfied condition "Succeeded or Failed" +Oct 27 15:18:38.074: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-d17a4bd6-40d6-4368-b61c-efad905dde4d container test-container: +STEP: delete the pod +Oct 27 15:18:38.188: INFO: Waiting for pod pod-d17a4bd6-40d6-4368-b61c-efad905dde4d to disappear +Oct 27 15:18:38.200: INFO: Pod pod-d17a4bd6-40d6-4368-b61c-efad905dde4d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:18:38.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3683" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":228,"skipped":3862,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:18:38.234: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-9040 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:18:38.419: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:18:38.446: INFO: Waiting for terminating namespaces to be deleted... +Oct 27 15:18:38.458: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 before test +Oct 27 15:18:38.479: INFO: addons-nginx-ingress-controller-76f55b7b5f-ffxv8 from kube-system started at 2021-10-27 14:09:38 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-w2blg from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: apiserver-proxy-vdnm2 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (2 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: calico-node-bmkxt from kube-system started at 2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: calico-node-vertical-autoscaler-785b5f968-sbxt6 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: calico-typha-deploy-546b97d4b5-kw64w from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-p96rk from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: calico-typha-vertical-autoscaler-5c9655cddd-z7tgn from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: coredns-7649bdf444-cnjp5 
from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: coredns-7649bdf444-x6nkv from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: csi-driver-node-disk-tb5lc from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: csi-driver-node-file-8vk78 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: kube-proxy-7d5xq from kube-system started at 2021-10-27 14:56:47 +0000 UTC (2 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: metrics-server-5555d7587-mw896 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: node-exporter-fg8qw from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: node-problem-detector-bxt7r from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: vpn-shoot-7f6446d489-9kghs from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: dashboard-metrics-scraper-7ccbfc448f-jcrjk from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:18:38.479: INFO: kubernetes-dashboard-65d5f5c55-sf9qc from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.479: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 15:18:38.479: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 before test +Oct 27 15:18:38.503: INFO: apiserver-proxy-8bg6p from kube-system started at 2021-10-27 13:56:32 +0000 UTC (2 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: blackbox-exporter-65c549b94c-vc8rp from kube-system started at 2021-10-27 14:08:45 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container blackbox-exporter ready: true, restart count 
0 +Oct 27 15:18:38.503: INFO: calico-node-v56vf from kube-system started at 2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: csi-driver-node-disk-h74nf from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: csi-driver-node-file-q9zq2 from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: kube-proxy-mlg7s from kube-system started at 2021-10-27 14:56:47 +0000 UTC (2 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: node-exporter-fs6fl from kube-system started at 2021-10-27 13:56:32 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:18:38.503: INFO: node-problem-detector-srvcj from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 15:18:38.503: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.16b1ec4479d83c36], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:18:39.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-9040" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":229,"skipped":3878,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:18:39.609: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-3376 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 15:18:39.792: INFO: PodSpec: initContainers in spec.initContainers +Oct 27 15:19:28.166: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-80e761d6-190e-4843-8aa9-b816bae99dda", GenerateName:"", Namespace:"init-container-3376", SelfLink:"", UID:"9da0c1e3-54f5-43c4-94c1-529bd3858515", ResourceVersion:"39939", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770944719, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"792199441"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"942dda754129e274a04146b68bb03a7303bee6e12a0d7cff16ac5ffe8ab72a68", "cni.projectcalico.org/podIP":"100.96.1.19/32", "cni.projectcalico.org/podIPs":"100.96.1.19/32", "kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003a7c0f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003a7c108), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003a7c120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003a7c138), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003a7c150), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003a7c168), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-77vdk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc006293f00), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmgxs-skc.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-77vdk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmgxs-skc.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-77vdk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", 
Value:"api.tmgxs-skc.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-77vdk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005147be8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011a4fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005147cc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005147ce0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005147ce8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005147cec), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00591ff40), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944719, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944719, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944719, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", 
Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944719, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.250.0.4", PodIP:"100.96.1.19", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.19"}}, StartTime:(*v1.Time)(0xc003a7c198), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003a7c1b0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0011a5110)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://714060daccdc18f432b9369b838c4fe7cc2450b290a68800efd76704859a96cf", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc006293fe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc006293fc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc005147e1f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:19:28.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3376" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":230,"skipped":3897,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:19:28.200: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-9343 +STEP: Waiting for a default service account to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:19:28.389: INFO: Creating pod... +Oct 27 15:19:28.419: INFO: Pod Quantity: 1 Status: Pending +Oct 27 15:19:29.436: INFO: Pod Quantity: 1 Status: Pending +Oct 27 15:19:30.431: INFO: Pod Quantity: 1 Status: Pending +Oct 27 15:19:31.433: INFO: Pod Status: Running +Oct 27 15:19:31.433: INFO: Creating service... +Oct 27 15:19:31.452: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/pods/agnhost/proxy/some/path/with/DELETE +Oct 27 15:19:31.563: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 15:19:31.563: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/pods/agnhost/proxy/some/path/with/GET +Oct 27 15:19:31.609: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 15:19:31.609: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/pods/agnhost/proxy/some/path/with/HEAD +Oct 27 15:19:31.654: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 15:19:31.654: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/pods/agnhost/proxy/some/path/with/OPTIONS +Oct 27 15:19:31.771: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 15:19:31.772: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/pods/agnhost/proxy/some/path/with/PATCH +Oct 27 15:19:31.806: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 27 15:19:31.806: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/pods/agnhost/proxy/some/path/with/POST +Oct 27 15:19:31.835: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 15:19:31.835: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/pods/agnhost/proxy/some/path/with/PUT +Oct 27 15:19:31.873: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 27 15:19:31.873: INFO: Starting http.Client for 
https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/services/test-service/proxy/some/path/with/DELETE +Oct 27 15:19:31.908: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 15:19:31.908: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/services/test-service/proxy/some/path/with/GET +Oct 27 15:19:31.938: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 15:19:31.939: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/services/test-service/proxy/some/path/with/HEAD +Oct 27 15:19:31.973: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 15:19:31.973: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/services/test-service/proxy/some/path/with/OPTIONS +Oct 27 15:19:32.004: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 15:19:32.004: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/services/test-service/proxy/some/path/with/PATCH +Oct 27 15:19:32.036: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 27 15:19:32.036: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/services/test-service/proxy/some/path/with/POST +Oct 27 15:19:32.067: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 15:19:32.067: INFO: Starting http.Client for https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-9343/services/test-service/proxy/some/path/with/PUT +Oct 27 15:19:32.098: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:19:32.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-9343" for this suite. 
+•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":231,"skipped":3905,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:19:32.136: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9873 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Oct 27 15:19:32.373: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:19:39.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9873" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":232,"skipped":3963,"failed":0} +SSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:19:39.384: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9791 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 15:19:39.571: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 create -f -' +Oct 27 15:19:39.836: INFO: stderr: "" +Oct 27 15:19:39.837: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:19:39.837: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:19:39.946: INFO: stderr: "" +Oct 27 15:19:39.946: INFO: stdout: "update-demo-nautilus-7kj4m update-demo-nautilus-m9l6v " +Oct 27 15:19:39.947: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-7kj4m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:19:40.054: INFO: stderr: "" +Oct 27 15:19:40.055: INFO: stdout: "" +Oct 27 15:19:40.055: INFO: update-demo-nautilus-7kj4m is created but not running +Oct 27 15:19:45.056: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:19:45.164: INFO: stderr: "" +Oct 27 15:19:45.165: INFO: stdout: "update-demo-nautilus-7kj4m update-demo-nautilus-m9l6v " +Oct 27 15:19:45.165: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-7kj4m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:19:45.259: INFO: stderr: "" +Oct 27 15:19:45.259: INFO: stdout: "true" +Oct 27 15:19:45.259: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-7kj4m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:19:45.347: INFO: stderr: "" +Oct 27 15:19:45.347: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:19:45.347: INFO: validating pod update-demo-nautilus-7kj4m +Oct 27 15:19:45.459: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:19:45.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:19:45.460: INFO: update-demo-nautilus-7kj4m is verified up and running +Oct 27 15:19:45.460: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-m9l6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:19:45.551: INFO: stderr: "" +Oct 27 15:19:45.551: INFO: stdout: "true" +Oct 27 15:19:45.551: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-m9l6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:19:45.641: INFO: stderr: "" +Oct 27 15:19:45.641: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:19:45.641: INFO: validating pod update-demo-nautilus-m9l6v +Oct 27 15:19:45.709: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:19:45.709: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:19:45.709: INFO: update-demo-nautilus-m9l6v is verified up and running +STEP: scaling down the replication controller +Oct 27 15:19:45.711: INFO: scanned /root for discovery docs: +Oct 27 15:19:45.711: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Oct 27 15:19:46.861: INFO: stderr: "" +Oct 27 15:19:46.861: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:19:46.861: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:19:46.956: INFO: stderr: "" +Oct 27 15:19:46.956: INFO: stdout: "update-demo-nautilus-7kj4m update-demo-nautilus-m9l6v " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Oct 27 15:19:51.958: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:19:52.046: INFO: stderr: "" +Oct 27 15:19:52.046: INFO: stdout: "update-demo-nautilus-m9l6v " +Oct 27 15:19:52.046: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-m9l6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:19:52.142: INFO: stderr: "" +Oct 27 15:19:52.142: INFO: stdout: "true" +Oct 27 15:19:52.142: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-m9l6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:19:52.232: INFO: stderr: "" +Oct 27 15:19:52.232: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:19:52.232: INFO: validating pod update-demo-nautilus-m9l6v +Oct 27 15:19:52.319: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:19:52.319: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:19:52.319: INFO: update-demo-nautilus-m9l6v is verified up and running +STEP: scaling up the replication controller +Oct 27 15:19:52.322: INFO: scanned /root for discovery docs: +Oct 27 15:19:52.322: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Oct 27 15:19:53.461: INFO: stderr: "" +Oct 27 15:19:53.461: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:19:53.461: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:19:53.555: INFO: stderr: "" +Oct 27 15:19:53.555: INFO: stdout: "update-demo-nautilus-6zznd update-demo-nautilus-m9l6v " +Oct 27 15:19:53.556: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-6zznd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:19:53.647: INFO: stderr: "" +Oct 27 15:19:53.647: INFO: stdout: "" +Oct 27 15:19:53.647: INFO: update-demo-nautilus-6zznd is created but not running +Oct 27 15:19:58.651: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:19:58.762: INFO: stderr: "" +Oct 27 15:19:58.762: INFO: stdout: "update-demo-nautilus-6zznd update-demo-nautilus-m9l6v " +Oct 27 15:19:58.762: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-6zznd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:19:58.862: INFO: stderr: "" +Oct 27 15:19:58.862: INFO: stdout: "true" +Oct 27 15:19:58.862: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-6zznd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:19:58.956: INFO: stderr: "" +Oct 27 15:19:58.956: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:19:58.956: INFO: validating pod update-demo-nautilus-6zznd +Oct 27 15:19:59.072: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:19:59.072: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:19:59.072: INFO: update-demo-nautilus-6zznd is verified up and running +Oct 27 15:19:59.072: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-m9l6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:19:59.175: INFO: stderr: "" +Oct 27 15:19:59.175: INFO: stdout: "true" +Oct 27 15:19:59.175: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods update-demo-nautilus-m9l6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:19:59.274: INFO: stderr: "" +Oct 27 15:19:59.274: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:19:59.274: INFO: validating pod update-demo-nautilus-m9l6v +Oct 27 15:19:59.360: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:19:59.360: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:19:59.360: INFO: update-demo-nautilus-m9l6v is verified up and running +STEP: using delete to clean up resources +Oct 27 15:19:59.360: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 delete --grace-period=0 --force -f -' +Oct 27 15:19:59.479: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:19:59.479: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 15:19:59.479: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get rc,svc -l name=update-demo --no-headers' +Oct 27 15:19:59.593: INFO: stderr: "No resources found in kubectl-9791 namespace.\n" +Oct 27 15:19:59.593: INFO: stdout: "" +Oct 27 15:19:59.593: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9791 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 15:19:59.699: INFO: stderr: "" +Oct 27 15:19:59.699: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:19:59.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9791" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":233,"skipped":3967,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:19:59.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-600 +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:19:59.931: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:02.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-600" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":234,"skipped":3986,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:02.814: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-9200 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:20:03.057: INFO: Creating ReplicaSet my-hostname-basic-9bb2c675-8d96-4f58-809c-c99fe3623e45 +Oct 27 15:20:03.085: INFO: Pod name my-hostname-basic-9bb2c675-8d96-4f58-809c-c99fe3623e45: Found 1 pods out of 1 +Oct 27 15:20:03.085: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9bb2c675-8d96-4f58-809c-c99fe3623e45" is running +Oct 27 15:20:07.126: INFO: Pod "my-hostname-basic-9bb2c675-8d96-4f58-809c-c99fe3623e45-49qjx" is running (conditions: []) +Oct 27 15:20:07.126: INFO: Trying to dial the pod +Oct 27 15:20:12.224: INFO: Controller 
my-hostname-basic-9bb2c675-8d96-4f58-809c-c99fe3623e45: Got expected result from replica 1 [my-hostname-basic-9bb2c675-8d96-4f58-809c-c99fe3623e45-49qjx]: "my-hostname-basic-9bb2c675-8d96-4f58-809c-c99fe3623e45-49qjx", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:12.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-9200" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":235,"skipped":4007,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:12.258: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-526 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-cc1c0db7-4e64-4980-9653-4c4f8dbdf6b1 +STEP: Creating a pod to test consume configMaps +Oct 27 15:20:12.477: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e438a5a-45db-4d29-8588-646dc7e7ca5b" in namespace "configmap-526" to be "Succeeded or Failed" +Oct 27 15:20:12.490: INFO: Pod "pod-configmaps-4e438a5a-45db-4d29-8588-646dc7e7ca5b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.303342ms +Oct 27 15:20:14.504: INFO: Pod "pod-configmaps-4e438a5a-45db-4d29-8588-646dc7e7ca5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02731393s +STEP: Saw pod success +Oct 27 15:20:14.504: INFO: Pod "pod-configmaps-4e438a5a-45db-4d29-8588-646dc7e7ca5b" satisfied condition "Succeeded or Failed" +Oct 27 15:20:14.515: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-4e438a5a-45db-4d29-8588-646dc7e7ca5b container agnhost-container: +STEP: delete the pod +Oct 27 15:20:14.633: INFO: Waiting for pod pod-configmaps-4e438a5a-45db-4d29-8588-646dc7e7ca5b to disappear +Oct 27 15:20:14.644: INFO: Pod pod-configmaps-4e438a5a-45db-4d29-8588-646dc7e7ca5b no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:14.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-526" for this suite. 
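+
+The ConfigMap test above consumes a key through a volume with an explicit key-to-path mapping, so the data appears under a remapped file path rather than the key name. A minimal sketch of that shape, assuming placeholder names and a busybox image:
+
+```bash
+kubectl create configmap demo-config --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-volume-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    # The key "data-1" is readable at the remapped path, not at /etc/config/data-1.
+    command: ["cat", "/etc/config/path/to/data-1"]
+    volumeMounts:
+    - name: config
+      mountPath: /etc/config
+  volumes:
+  - name: config
+    configMap:
+      name: demo-config
+      items:
+      - key: data-1
+        path: path/to/data-1
+EOF
+```
+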
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":236,"skipped":4031,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:14.680: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8153 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:20:14.888: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2f47744e-ba7b-4784-a066-bc9c4bfaa203" in namespace "security-context-test-8153" to be "Succeeded or Failed" +Oct 27 15:20:14.899: INFO: Pod "busybox-readonly-false-2f47744e-ba7b-4784-a066-bc9c4bfaa203": Phase="Pending", Reason="", readiness=false. Elapsed: 11.642395ms +Oct 27 15:20:16.912: INFO: Pod "busybox-readonly-false-2f47744e-ba7b-4784-a066-bc9c4bfaa203": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024255944s +Oct 27 15:20:18.925: INFO: Pod "busybox-readonly-false-2f47744e-ba7b-4784-a066-bc9c4bfaa203": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036888477s +Oct 27 15:20:18.925: INFO: Pod "busybox-readonly-false-2f47744e-ba7b-4784-a066-bc9c4bfaa203" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:18.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-8153" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":237,"skipped":4054,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:18.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9847 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:20:19.178: INFO: Waiting up to 5m0s for pod "downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e" in namespace "downward-api-9847" to be "Succeeded or Failed" +Oct 27 15:20:19.189: INFO: Pod "downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.047527ms +Oct 27 15:20:21.202: INFO: Pod "downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023646471s +Oct 27 15:20:23.215: INFO: Pod "downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036624682s +STEP: Saw pod success +Oct 27 15:20:23.215: INFO: Pod "downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e" satisfied condition "Succeeded or Failed" +Oct 27 15:20:23.227: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e container dapi-container: +STEP: delete the pod +Oct 27 15:20:23.306: INFO: Waiting for pod downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e to disappear +Oct 27 15:20:23.317: INFO: Pod downward-api-7bc98a80-62b0-4630-b349-b25afb2cf21e no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:23.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9847" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":238,"skipped":4102,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:23.351: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2638 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:20:24.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944824, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944824, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944824, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944824, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:20:27.582: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Oct 27 15:20:27.797: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:28.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2638" for this suite. +STEP: Destroying namespace "webhook-2638-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":239,"skipped":4112,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:28.116: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-2997 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-2997 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-2997 +Oct 27 15:20:28.337: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:20:38.353: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:20:38.441: INFO: Deleting all statefulset in ns statefulset-2997 +Oct 27 15:20:38.453: INFO: Scaling statefulset ss to 0 +Oct 27 15:20:48.502: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:20:48.513: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:48.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2997" for this suite. 
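+
+The StatefulSet test above reads, updates, and patches the scale subresource rather than the object itself. The same endpoint can be inspected and driven with kubectl; the namespace is a placeholder:
+
+```bash
+# Read the scale subresource directly; it returns an autoscaling/v1 Scale object.
+kubectl get --raw /apis/apps/v1/namespaces/<namespace>/statefulsets/ss/scale
+# kubectl scale writes through the same subresource.
+kubectl -n <namespace> scale statefulset ss --replicas=2
+```
+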
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":240,"skipped":4133,"failed":0} + +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:48.581: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7679 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:20:48.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2" in namespace "projected-7679" to be "Succeeded or Failed" +Oct 27 15:20:48.806: INFO: Pod "downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.884892ms +Oct 27 15:20:50.819: INFO: Pod "downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2": Phase="Running", Reason="", readiness=true. Elapsed: 2.02952337s +Oct 27 15:20:52.832: INFO: Pod "downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042441715s +STEP: Saw pod success +Oct 27 15:20:52.832: INFO: Pod "downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2" satisfied condition "Succeeded or Failed" +Oct 27 15:20:52.849: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2 container client-container: +STEP: delete the pod +Oct 27 15:20:52.958: INFO: Waiting for pod downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2 to disappear +Oct 27 15:20:52.969: INFO: Pod downwardapi-volume-4a6a8e59-9b9d-416e-9e91-a37c458037e2 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:52.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7679" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":241,"skipped":4133,"failed":0} +S +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:53.002: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1755 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-1755 +STEP: creating service affinity-clusterip in namespace services-1755 +STEP: creating replication controller affinity-clusterip in namespace services-1755 +I1027 15:20:53.220907 5768 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1755, replica count: 3 +I1027 15:20:56.272555 5768 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:20:56.295: INFO: Creating new exec pod +Oct 27 15:21:01.336: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1755 exec execpod-affinityk6cbr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Oct 27 15:21:01.884: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Oct 27 15:21:01.884: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:21:01.884: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1755 exec execpod-affinityk6cbr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.154.76 80' +Oct 27 15:21:02.373: INFO: stderr: "+ nc -v -t -w 2 100.66.154.76 80\n+ echo hostName\nConnection to 100.66.154.76 80 port [tcp/http] succeeded!\n" +Oct 27 15:21:02.373: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:21:02.373: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1755 exec execpod-affinityk6cbr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.66.154.76:80/ ; done' +Oct 27 
15:21:02.946: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.154.76:80/\n" +Oct 27 15:21:02.946: INFO: stdout: "\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx\naffinity-clusterip-hk6vx" +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Received response from host: affinity-clusterip-hk6vx +Oct 27 15:21:02.946: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-1755, will wait for the garbage collector to delete the pods +Oct 27 15:21:03.041: INFO: Deleting ReplicationController affinity-clusterip took: 12.753161ms +Oct 27 15:21:03.142: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.564952ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:05.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1755" 
for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":242,"skipped":4134,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:05.998: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3672 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:21:06.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc" in namespace "downward-api-3672" to be "Succeeded or Failed" +Oct 27 15:21:06.217: INFO: Pod "downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.995286ms +Oct 27 15:21:08.230: INFO: Pod "downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02436216s +Oct 27 15:21:10.242: INFO: Pod "downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036468218s +STEP: Saw pod success +Oct 27 15:21:10.242: INFO: Pod "downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc" satisfied condition "Succeeded or Failed" +Oct 27 15:21:10.253: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc container client-container: +STEP: delete the pod +Oct 27 15:21:10.362: INFO: Waiting for pod downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc to disappear +Oct 27 15:21:10.372: INFO: Pod downwardapi-volume-feb35981-95ef-4ed5-8c2f-b5c5b61cc8cc no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:10.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3672" for this suite. 
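+
+The affinity-clusterip run a few entries above shows all sixteen requests landing on the same backend pod, which is the behavior sessionAffinity: ClientIP guarantees for a ClusterIP Service. A minimal sketch of such a Service; the selector, port, and targetPort are placeholders:
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: affinity-clusterip
+spec:
+  type: ClusterIP
+  # Route every request from a given client IP to the same backend pod.
+  sessionAffinity: ClientIP
+  selector:
+    name: affinity-clusterip
+  ports:
+  - port: 80
+    targetPort: 9376
+EOF
+```
+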
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":243,"skipped":4176,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:10.406: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6234 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Oct 27 15:21:10.592: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:21:14.786: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:28.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6234" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":244,"skipped":4187,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:28.617: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-8254 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:28.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8254" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":245,"skipped":4221,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:28.906: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5510 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:21:29.093: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Oct 27 15:21:29.120: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 15:21:33.144: INFO: Creating deployment "test-rolling-update-deployment" +Oct 27 15:21:33.156: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Oct 27 15:21:33.181: INFO: deployment 
"test-rolling-update-deployment" doesn't have the required revision set +Oct 27 15:21:35.206: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Oct 27 15:21:35.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944893, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944893, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944893, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944893, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:21:37.231: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:21:37.266: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5510 ec9a8333-c114-4fc3-b39a-c3ac2402535d 41229 1 2021-10-27 15:21:33 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-27 15:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:21:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005716a78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 15:21:33 +0000 UTC,LastTransitionTime:2021-10-27 15:21:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-27 15:21:35 +0000 UTC,LastTransitionTime:2021-10-27 15:21:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:21:37.278: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-5510 c80efce9-cab5-44b4-b5d3-9e60dd5863f1 41222 1 2021-10-27 15:21:33 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ec9a8333-c114-4fc3-b39a-c3ac2402535d 0xc0057175b7 0xc0057175b8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec9a8333-c114-4fc3-b39a-c3ac2402535d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:21:35 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0057176c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:21:37.278: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Oct 27 15:21:37.278: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5510 17ec21a1-7b31-45ce-8aef-e865dbaa4980 41228 2 2021-10-27 15:21:29 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ec9a8333-c114-4fc3-b39a-c3ac2402535d 0xc005717357 0xc005717358}] [] [{e2e.test Update apps/v1 2021-10-27 15:21:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:21:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec9a8333-c114-4fc3-b39a-c3ac2402535d\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:21:35 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0057174c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:21:37.290: INFO: Pod "test-rolling-update-deployment-585b757574-nxpqv" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-nxpqv test-rolling-update-deployment-585b757574- deployment-5510 
9ab1e861-1766-48a1-bbf6-f153e860e205 41221 0 2021-10-27 15:21:33 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[cni.projectcalico.org/containerID:e8fec3f833c4d7213a02e26fbba6ffc7315d938dc16e5ebf06a50de3f241bb0d cni.projectcalico.org/podIP:100.96.1.36/32 cni.projectcalico.org/podIPs:100.96.1.36/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 c80efce9-cab5-44b4-b5d3-9e60dd5863f1 0xc005717d97 0xc005717d98}] [] [{calico Update v1 2021-10-27 15:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 15:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c80efce9-cab5-44b4-b5d3-9e60dd5863f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:21:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.36\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2fzxd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2fzxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContaine
rs:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.36,StartTime:2021-10-27 15:21:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:21:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://d29a5160ef277fb9d478f8bc1216bd263976559b393a6b75898426c9d31bf6f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:37.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-5510" for this suite. +•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":246,"skipped":4232,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:37.325: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9987 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-ae48e375-e69f-4d88-ab00-48761137601c +STEP: Creating a pod to test consume configMaps +Oct 27 15:21:37.545: INFO: Waiting up to 5m0s for pod "pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7" in namespace "configmap-9987" to be "Succeeded or Failed" +Oct 27 15:21:37.556: INFO: Pod "pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.374946ms +Oct 27 15:21:39.569: INFO: Pod "pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024428408s +Oct 27 15:21:41.583: INFO: Pod "pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037932604s +STEP: Saw pod success +Oct 27 15:21:41.583: INFO: Pod "pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7" satisfied condition "Succeeded or Failed" +Oct 27 15:21:41.595: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7 container configmap-volume-test: +STEP: delete the pod +Oct 27 15:21:41.654: INFO: Waiting for pod pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7 to disappear +Oct 27 15:21:41.665: INFO: Pod pod-configmaps-4600802e-b2bd-4f51-89df-a428d01b84c7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:41.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9987" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":247,"skipped":4272,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:41.698: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2443 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:21:41.904: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097" in namespace "projected-2443" to be "Succeeded or Failed" +Oct 27 15:21:41.915: INFO: Pod "downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097": Phase="Pending", Reason="", readiness=false. Elapsed: 11.152621ms +Oct 27 15:21:43.928: INFO: Pod "downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024373093s +Oct 27 15:21:45.940: INFO: Pod "downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036509316s +STEP: Saw pod success +Oct 27 15:21:45.940: INFO: Pod "downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097" satisfied condition "Succeeded or Failed" +Oct 27 15:21:45.951: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097 container client-container: +STEP: delete the pod +Oct 27 15:21:46.023: INFO: Waiting for pod downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097 to disappear +Oct 27 15:21:46.034: INFO: Pod downwardapi-volume-42832ab6-4e4a-43e0-8f7e-3b45fb0f7097 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:46.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2443" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":248,"skipped":4283,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:46.073: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-6469 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:21:46.286: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 15:21:50.317: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:21:54.422: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-6469 178d6137-220a-492a-bbad-42debdb355f8 41433 1 2021-10-27 15:21:50 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 15:21:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } 
{kube-controller-manager Update apps/v1 2021-10-27 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00592fb98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 15:21:50 +0000 UTC,LastTransitionTime:2021-10-27 15:21:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2021-10-27 15:21:52 +0000 UTC,LastTransitionTime:2021-10-27 15:21:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:21:54.435: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-6469 b763b30b-37c7-471f-8ee7-58c8731f7a6c 41426 1 2021-10-27 15:21:50 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 178d6137-220a-492a-bbad-42debdb355f8 0xc0059563d7 0xc0059563d8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:21:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"178d6137-220a-492a-bbad-42debdb355f8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:21:52 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005956508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:21:54.446: INFO: Pod "test-cleanup-deployment-5b4d99b59b-8cxc2" is available: +&Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-8cxc2 test-cleanup-deployment-5b4d99b59b- deployment-6469 95cc716e-3783-46ac-b919-1264167ff885 41425 0 2021-10-27 15:21:50 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[cni.projectcalico.org/containerID:a41a4d10297c72a9011ef9e6d8fcf3a33f4959d251aa26151b4f96c3f99d0eba cni.projectcalico.org/podIP:100.96.1.40/32 cni.projectcalico.org/podIPs:100.96.1.40/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b b763b30b-37c7-471f-8ee7-58c8731f7a6c 0xc0058f68c7 0xc0058f68c8}] [] [{calico Update v1 2021-10-27 15:21:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-27 15:21:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b763b30b-37c7-471f-8ee7-58c8731f7a6c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:21:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vg9xv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vg9xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[str
ing]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:21:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:100.96.1.40,StartTime:2021-10-27 15:21:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:21:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://36a8fb97198e0a2f4716736e47faff64c57f50ed76bbfacf15ccc3064a8b2146,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:54.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6469" for this suite. 
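+Note: the cleanup verified here is driven by the RevisionHistoryLimit:*0 visible in the Deployment dump above; with a zero history limit the controller deletes superseded ReplicaSets as soon as a rollout completes. A minimal sketch to reproduce this by hand, assuming kubectl access to a test cluster (all names hypothetical):
+kubectl create deployment cleanup-demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.32
+kubectl patch deployment cleanup-demo --type=merge -p '{"spec":{"revisionHistoryLimit":0}}'
+kubectl set env deployment/cleanup-demo DEMO=1   # roll out a second revision
+kubectl get rs -l app=cleanup-demo               # only the new ReplicaSet should remain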
+•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":249,"skipped":4290,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:54.485: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslicemirroring-1115 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: mirroring a new custom Endpoint +Oct 27 15:21:54.712: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint +STEP: mirroring deletion of a custom Endpoint +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:56.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-1115" for this suite. +•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":250,"skipped":4298,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:56.805: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename lease-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-4658 +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:21:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-4658" for this suite. 
+•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":251,"skipped":4312,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:21:57.333: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-3296 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override all +Oct 27 15:21:57.536: INFO: Waiting up to 5m0s for pod "client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317" in namespace "containers-3296" to be "Succeeded or Failed" +Oct 27 15:21:57.548: INFO: Pod "client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317": Phase="Pending", Reason="", readiness=false. Elapsed: 11.27386ms +Oct 27 15:21:59.561: INFO: Pod "client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02440951s +Oct 27 15:22:01.574: INFO: Pod "client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037727155s +STEP: Saw pod success +Oct 27 15:22:01.574: INFO: Pod "client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317" satisfied condition "Succeeded or Failed" +Oct 27 15:22:01.585: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317 container agnhost-container: +STEP: delete the pod +Oct 27 15:22:01.675: INFO: Waiting for pod client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317 to disappear +Oct 27 15:22:01.686: INFO: Pod client-containers-12218ce4-2667-46ea-aa8a-e13fa023e317 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:01.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-3296" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":252,"skipped":4320,"failed":0} +S +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:01.721: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-665 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 15:22:01.934: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:22:03.946: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:22:05.947: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 15:22:05.986: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:22:08.000: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 15:22:08.090: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 15:22:08.106: INFO: Pod pod-with-poststart-http-hook still exists +Oct 27 15:22:10.106: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 15:22:10.119: INFO: Pod pod-with-poststart-http-hook still exists +Oct 27 15:22:12.108: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 15:22:12.119: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:12.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-665" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":253,"skipped":4321,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:12.212: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-717 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:22:12.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c" in namespace "projected-717" to be "Succeeded or Failed" +Oct 27 15:22:12.427: INFO: Pod "downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.786483ms +Oct 27 15:22:14.439: INFO: Pod "downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02288335s +Oct 27 15:22:16.453: INFO: Pod "downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036671883s +STEP: Saw pod success +Oct 27 15:22:16.453: INFO: Pod "downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c" satisfied condition "Succeeded or Failed" +Oct 27 15:22:16.464: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c container client-container: +STEP: delete the pod +Oct 27 15:22:16.582: INFO: Waiting for pod downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c to disappear +Oct 27 15:22:16.593: INFO: Pod downwardapi-volume-e96a3dd3-1989-4b22-8b5d-1f262ae0006c no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:16.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-717" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":254,"skipped":4352,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:16.628: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5395 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:16.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5395" for this suite. +•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":255,"skipped":4381,"failed":0} +SSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:16.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-8136 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:22:17.049: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:22:17.074: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 15:22:17.085: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 before test +Oct 27 15:22:17.106: INFO: addons-nginx-ingress-controller-76f55b7b5f-ffxv8 from kube-system started at 2021-10-27 14:09:38 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-w2blg from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: apiserver-proxy-vdnm2 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (2 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: calico-node-bmkxt from kube-system started at 2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: calico-node-vertical-autoscaler-785b5f968-sbxt6 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: calico-typha-deploy-546b97d4b5-kw64w from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-p96rk from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: calico-typha-vertical-autoscaler-5c9655cddd-z7tgn from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: coredns-7649bdf444-cnjp5 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: coredns-7649bdf444-x6nkv from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: csi-driver-node-disk-tb5lc from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: csi-driver-node-file-8vk78 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: kube-proxy-7d5xq from kube-system started at 2021-10-27 14:56:47 +0000 UTC (2 container statuses recorded) +Oct 27 15:22:17.106: INFO: 
Container conntrack-fix ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: metrics-server-5555d7587-mw896 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: node-exporter-fg8qw from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: node-problem-detector-bxt7r from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: vpn-shoot-7f6446d489-9kghs from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: dashboard-metrics-scraper-7ccbfc448f-jcrjk from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:22:17.106: INFO: kubernetes-dashboard-65d5f5c55-sf9qc from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.106: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 15:22:17.106: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 before test +Oct 27 15:22:17.132: INFO: pod-handle-http-request from container-lifecycle-hook-665 started at 2021-10-27 15:22:01 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container agnhost-container ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: apiserver-proxy-8bg6p from kube-system started at 2021-10-27 13:56:32 +0000 UTC (2 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: blackbox-exporter-65c549b94c-vc8rp from kube-system started at 2021-10-27 14:08:45 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: calico-node-v56vf from kube-system started at 2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: csi-driver-node-disk-h74nf from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: csi-driver-node-file-q9zq2 from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: kube-proxy-mlg7s from kube-system started at 2021-10-27 
14:56:47 +0000 UTC (2 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: node-exporter-fs6fl from kube-system started at 2021-10-27 13:56:32 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: node-problem-detector-srvcj from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:22:17.132: INFO: pod-qos-class-44a38524-a963-419b-bc49-0fcadd7fb68e from pods-5395 started at 2021-10-27 15:22:16 +0000 UTC (1 container statuses recorded) +Oct 27 15:22:17.132: INFO: Container agnhost ready: false, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: verifying the node has the label node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +STEP: verifying the node has the label node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod pod-handle-http-request requesting resource cpu=0m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod addons-nginx-ingress-controller-76f55b7b5f-ffxv8 requesting resource cpu=100m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-w2blg requesting resource cpu=0m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod apiserver-proxy-8bg6p requesting resource cpu=40m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod apiserver-proxy-vdnm2 requesting resource cpu=40m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod blackbox-exporter-65c549b94c-vc8rp requesting resource cpu=11m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod calico-node-bmkxt requesting resource cpu=250m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod calico-node-v56vf requesting resource cpu=250m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod calico-node-vertical-autoscaler-785b5f968-sbxt6 requesting resource cpu=10m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod calico-typha-deploy-546b97d4b5-kw64w requesting resource cpu=200m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod calico-typha-horizontal-autoscaler-5b58bb446c-p96rk requesting resource cpu=10m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod calico-typha-vertical-autoscaler-5c9655cddd-z7tgn requesting resource cpu=10m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod coredns-7649bdf444-cnjp5 requesting resource cpu=50m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod coredns-7649bdf444-x6nkv requesting resource cpu=50m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod csi-driver-node-disk-h74nf requesting resource cpu=40m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod csi-driver-node-disk-tb5lc requesting resource cpu=40m on Node 
shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod csi-driver-node-file-8vk78 requesting resource cpu=40m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod csi-driver-node-file-q9zq2 requesting resource cpu=40m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod kube-proxy-7d5xq requesting resource cpu=34m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod kube-proxy-mlg7s requesting resource cpu=34m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod metrics-server-5555d7587-mw896 requesting resource cpu=50m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod node-exporter-fg8qw requesting resource cpu=50m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod node-exporter-fs6fl requesting resource cpu=50m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod node-problem-detector-bxt7r requesting resource cpu=49m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod node-problem-detector-srvcj requesting resource cpu=49m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +Oct 27 15:22:17.228: INFO: Pod vpn-shoot-7f6446d489-9kghs requesting resource cpu=100m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod dashboard-metrics-scraper-7ccbfc448f-jcrjk requesting resource cpu=0m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod kubernetes-dashboard-65d5f5c55-sf9qc requesting resource cpu=50m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.228: INFO: Pod pod-qos-class-44a38524-a963-419b-bc49-0fcadd7fb68e requesting resource cpu=100m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +STEP: Starting Pods to consume most of the cluster CPU. +Oct 27 15:22:17.228: INFO: Creating a pod which consumes cpu=550m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +Oct 27 15:22:17.246: INFO: Creating a pod which consumes cpu=914m on Node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-71c73424-ed30-4cd3-95a3-17fef2e9c197.16b1ec7765f2f0f3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8136/filler-pod-71c73424-ed30-4cd3-95a3-17fef2e9c197 to shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-71c73424-ed30-4cd3-95a3-17fef2e9c197.16b1ec77b21ee08c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-71c73424-ed30-4cd3-95a3-17fef2e9c197.16b1ec77d67eda3c], Reason = [Created], Message = [Created container filler-pod-71c73424-ed30-4cd3-95a3-17fef2e9c197] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-71c73424-ed30-4cd3-95a3-17fef2e9c197.16b1ec77dc69f67c], Reason = [Started], Message = [Started container filler-pod-71c73424-ed30-4cd3-95a3-17fef2e9c197] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-cf094662-3422-46ba-9191-9a2a73952bbf.16b1ec7764c13e53], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8136/filler-pod-cf094662-3422-46ba-9191-9a2a73952bbf to shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-cf094662-3422-46ba-9191-9a2a73952bbf.16b1ec7799bf5ebb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-cf094662-3422-46ba-9191-9a2a73952bbf.16b1ec77b4f12410], Reason = [Created], Message = [Created container filler-pod-cf094662-3422-46ba-9191-9a2a73952bbf] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-cf094662-3422-46ba-9191-9a2a73952bbf.16b1ec77bad6e82c], Reason = [Started], Message = [Started container filler-pod-cf094662-3422-46ba-9191-9a2a73952bbf] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.16b1ec785998a619], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] +STEP: removing the label node off the node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:22.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8136" for this suite. 
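+Note: the FailedScheduling event above is the core assertion of this test: after the filler pods consume almost all allocatable CPU, one more pod with an unsatisfiable request must stay Pending with "Insufficient cpu". The same condition can be provoked directly; a minimal sketch (hypothetical names, deliberately oversized request):
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: too-big-demo
+spec:
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.5
+    resources:
+      requests:
+        cpu: "64"   # hypothetical: well above any node's allocatable CPU
+EOF
+kubectl get events --field-selector reason=FailedScheduling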
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":256,"skipped":4390,"failed":0} +SSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:22.507: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-9030 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-downwardapi-qnz8 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:22:22.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qnz8" in namespace "subpath-9030" to be "Succeeded or Failed" +Oct 27 15:22:22.751: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.041656ms +Oct 27 15:22:24.764: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024324854s +Oct 27 15:22:26.778: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 4.03799142s +Oct 27 15:22:28.799: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 6.059092474s +Oct 27 15:22:30.811: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 8.071631262s +Oct 27 15:22:32.824: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 10.084526962s +Oct 27 15:22:34.837: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 12.097544011s +Oct 27 15:22:36.850: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 14.10974715s +Oct 27 15:22:38.863: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 16.123152263s +Oct 27 15:22:40.876: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 18.135896596s +Oct 27 15:22:42.889: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 20.149086078s +Oct 27 15:22:44.903: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Running", Reason="", readiness=true. Elapsed: 22.162986244s +Oct 27 15:22:46.917: INFO: Pod "pod-subpath-test-downwardapi-qnz8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.177122267s +STEP: Saw pod success +Oct 27 15:22:46.917: INFO: Pod "pod-subpath-test-downwardapi-qnz8" satisfied condition "Succeeded or Failed" +Oct 27 15:22:46.933: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-subpath-test-downwardapi-qnz8 container test-container-subpath-downwardapi-qnz8: +STEP: delete the pod +Oct 27 15:22:47.002: INFO: Waiting for pod pod-subpath-test-downwardapi-qnz8 to disappear +Oct 27 15:22:47.013: INFO: Pod pod-subpath-test-downwardapi-qnz8 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-qnz8 +Oct 27 15:22:47.013: INFO: Deleting pod "pod-subpath-test-downwardapi-qnz8" in namespace "subpath-9030" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:47.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-9030" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":257,"skipped":4393,"failed":0} +S +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:47.058: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-1919 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:47.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-1919" for this suite. 
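+Note: the rejection above happens at admission: a malformed sysctl name fails API validation, so the pod is never created and nothing reaches the kubelet. A minimal sketch (hypothetical names; the create is expected to fail):
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sysctl-demo
+spec:
+  securityContext:
+    sysctls:
+    - name: kernel.shm_rmid_forced   # well-formed, would be accepted
+      value: "0"
+    - name: foo-                     # malformed name, rejected by validation
+      value: "bar"
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.5
+EOF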
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":258,"skipped":4394,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:47.284: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8548 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:22:48.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944967, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944967, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944967, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944967, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:22:50.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944967, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944967, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944968, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944967, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:22:53.051: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
+[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:54.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8548" for this suite. +STEP: Destroying namespace "webhook-8548-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":259,"skipped":4422,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:54.231: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7910 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 15:22:57.495: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:22:57.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-7910" for this suite. 
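+Note: the assertion above ("Expected: &{OK} to match ...") checks that whatever the container writes to its terminationMessagePath is surfaced in the container status; with FallbackToLogsOnError the logs are only consulted when the pod fails and the file is empty, so here the file contents win. A minimal sketch (hypothetical names; image tag assumed):
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: termmsg-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox:1.34   # any small shell image works
+    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: FallbackToLogsOnError
+EOF
+kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'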
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":260,"skipped":4464,"failed":0} +SS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:22:57.558: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-4641 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 15:22:57.777: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:22:59.790: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:23:01.791: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 15:23:01.830: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:23:03.843: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:23:06.011: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 15:23:06.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:23:06.047: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 27 15:23:08.049: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 15:23:08.061: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:08.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4641" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":261,"skipped":4466,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:08.130: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-859 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:23:08.342: INFO: The status of Pod annotationupdate2d4c0aca-efda-4c80-82de-d9f0eddd462d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:23:10.355: INFO: The status of Pod annotationupdate2d4c0aca-efda-4c80-82de-d9f0eddd462d is Running (Ready = true) +Oct 27 15:23:10.983: INFO: Successfully updated pod "annotationupdate2d4c0aca-efda-4c80-82de-d9f0eddd462d" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:13.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-859" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":262,"skipped":4503,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:13.113: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-202 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-202 +STEP: creating service affinity-nodeport-transition in namespace services-202 +STEP: creating replication controller affinity-nodeport-transition in namespace services-202 +I1027 15:23:13.328031 5768 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-202, replica count: 3 +I1027 15:23:16.379655 5768 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:23:16.423: INFO: Creating new exec pod +Oct 27 15:23:19.485: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-202 exec execpod-affinityttptz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Oct 27 15:23:20.023: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Oct 27 15:23:20.023: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:23:20.023: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-202 exec execpod-affinityttptz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.226.190 80' +Oct 27 15:23:20.549: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.226.190 80\nConnection to 100.64.226.190 80 port [tcp/http] succeeded!\n" +Oct 27 15:23:20.550: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:23:20.550: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-202 exec execpod-affinityttptz -- /bin/sh -x -c echo hostName | nc 
-v -t -w 2 10.250.0.5 32094' +Oct 27 15:23:21.065: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.5 32094\nConnection to 10.250.0.5 32094 port [tcp/*] succeeded!\n" +Oct 27 15:23:21.065: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:23:21.065: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-202 exec execpod-affinityttptz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.4 32094' +Oct 27 15:23:21.615: INFO: stderr: "+ nc -v -t -w 2 10.250.0.4 32094\nConnection to 10.250.0.4 32094 port [tcp/*] succeeded!\n+ echo hostName\n" +Oct 27 15:23:21.615: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:23:21.642: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-202 exec execpod-affinityttptz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.5:32094/ ; done' +Oct 27 15:23:22.278: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n" +Oct 27 15:23:22.278: INFO: stdout: "\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-fpd6h\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-cvbgz" +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received 
response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-fpd6h +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.278: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.302: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-202 exec execpod-affinityttptz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.5:32094/ ; done' +Oct 27 15:23:22.981: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n" +Oct 27 15:23:22.981: INFO: stdout: "\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-fpd6h\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-fpd6h\naffinity-nodeport-transition-fpd6h\naffinity-nodeport-transition-fpd6h\naffinity-nodeport-transition-k6klz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz" +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-fpd6h +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-fpd6h +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-fpd6h +Oct 27 
15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-fpd6h +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-k6klz +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.981: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.982: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.982: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.982: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.982: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.982: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:22.982: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:52.982: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-202 exec execpod-affinityttptz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.5:32094/ ; done' +Oct 27 15:23:53.683: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:32094/\n" +Oct 27 15:23:53.683: INFO: stdout: "\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz\naffinity-nodeport-transition-cvbgz" +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: 
affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Received response from host: affinity-nodeport-transition-cvbgz +Oct 27 15:23:53.683: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-202, will wait for the garbage collector to delete the pods +Oct 27 15:23:53.776: INFO: Deleting ReplicationController affinity-nodeport-transition took: 12.695067ms +Oct 27 15:23:53.877: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.931322ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:23:56.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-202" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":263,"skipped":4533,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:23:56.654: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3082 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 15:23:56.859: INFO: Waiting up to 5m0s for pod "pod-e78f565f-4a26-477b-b8e8-873ba0a76269" in namespace "emptydir-3082" to be "Succeeded or Failed" +Oct 27 15:23:56.871: INFO: Pod "pod-e78f565f-4a26-477b-b8e8-873ba0a76269": Phase="Pending", Reason="", readiness=false. Elapsed: 11.793607ms +Oct 27 15:23:58.883: INFO: Pod "pod-e78f565f-4a26-477b-b8e8-873ba0a76269": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023938374s +Oct 27 15:24:00.895: INFO: Pod "pod-e78f565f-4a26-477b-b8e8-873ba0a76269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035903026s +STEP: Saw pod success +Oct 27 15:24:00.895: INFO: Pod "pod-e78f565f-4a26-477b-b8e8-873ba0a76269" satisfied condition "Succeeded or Failed" +Oct 27 15:24:00.907: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-e78f565f-4a26-477b-b8e8-873ba0a76269 container test-container: +STEP: delete the pod +Oct 27 15:24:00.973: INFO: Waiting for pod pod-e78f565f-4a26-477b-b8e8-873ba0a76269 to disappear +Oct 27 15:24:00.985: INFO: Pod pod-e78f565f-4a26-477b-b8e8-873ba0a76269 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:24:00.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3082" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":264,"skipped":4550,"failed":0} + +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:24:01.018: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3254 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:24:01.224: INFO: Waiting up to 5m0s for pod "busybox-user-65534-bec7df60-da15-494b-a67f-372e35474979" in namespace "security-context-test-3254" to be "Succeeded or Failed" +Oct 27 15:24:01.236: INFO: Pod "busybox-user-65534-bec7df60-da15-494b-a67f-372e35474979": Phase="Pending", Reason="", readiness=false. Elapsed: 11.70226ms +Oct 27 15:24:03.252: INFO: Pod "busybox-user-65534-bec7df60-da15-494b-a67f-372e35474979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028042658s +Oct 27 15:24:05.264: INFO: Pod "busybox-user-65534-bec7df60-da15-494b-a67f-372e35474979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040160649s +Oct 27 15:24:05.265: INFO: Pod "busybox-user-65534-bec7df60-da15-494b-a67f-372e35474979" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:24:05.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-3254" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":265,"skipped":4550,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:24:05.299: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-528 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:24:05.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8" in namespace "projected-528" to be "Succeeded or Failed" +Oct 27 15:24:05.704: INFO: Pod "downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.006839ms +Oct 27 15:24:07.716: INFO: Pod "downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025265404s +Oct 27 15:24:09.729: INFO: Pod "downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038144616s +STEP: Saw pod success +Oct 27 15:24:09.729: INFO: Pod "downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8" satisfied condition "Succeeded or Failed" +Oct 27 15:24:09.741: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8 container client-container: +STEP: delete the pod +Oct 27 15:24:09.860: INFO: Waiting for pod downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8 to disappear +Oct 27 15:24:09.879: INFO: Pod downwardapi-volume-3dacddd0-10aa-408d-9168-04f5b9e6c8d8 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:24:09.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-528" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":266,"skipped":4560,"failed":0} +SSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:24:09.913: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9945 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-9945 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating stateful set ss in namespace statefulset-9945 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9945 +Oct 27 15:24:10.133: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:24:20.145: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Oct 27 15:24:20.157: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9945 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:24:20.691: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:24:20.691: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:24:20.691: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:24:20.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 15:24:30.717: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:24:30.717: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:24:30.773: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 15:24:30.773: INFO: ss-0 shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2021-10-27 15:24:10 +0000 UTC }] +Oct 27 15:24:30.773: INFO: ss-1 Pending [] +Oct 27 15:24:30.773: INFO: +Oct 27 15:24:30.773: INFO: StatefulSet ss has not reached scale 3, at 2 +Oct 27 15:24:31.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988551069s +Oct 27 15:24:32.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975079984s +Oct 27 15:24:33.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962390288s +Oct 27 15:24:34.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.949259536s +Oct 27 15:24:35.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.924709038s +Oct 27 15:24:36.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.910873864s +Oct 27 15:24:37.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.898570158s +Oct 27 15:24:38.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.885255096s +Oct 27 15:24:39.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 870.185689ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9945 +Oct 27 15:24:40.918: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9945 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:41.766: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:24:41.766: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:24:41.766: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:24:41.766: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9945 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:42.269: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 15:24:42.269: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:24:42.269: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:24:42.269: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9945 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:24:42.798: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 15:24:42.798: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:24:42.798: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:24:42.811: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:24:42.811: INFO: Waiting for pod ss-1 to enter 
Running - Ready=true, currently Running - Ready=true +Oct 27 15:24:42.811: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Oct 27 15:24:42.823: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9945 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:24:43.334: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:24:43.334: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:24:43.334: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:24:43.334: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9945 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:24:43.826: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:24:43.826: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:24:43.826: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:24:43.826: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9945 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:24:44.386: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:24:44.386: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:24:44.386: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:24:44.386: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:24:44.398: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Oct 27 15:24:54.423: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:24:54.423: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:24:54.423: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:24:54.465: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 15:24:54.465: INFO: ss-0 shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:10 +0000 UTC }] +Oct 27 15:24:54.465: INFO: ss-1 shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC }] +Oct 27 15:24:54.465: INFO: ss-2 shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC }] +Oct 27 15:24:54.465: INFO: +Oct 27 15:24:54.465: INFO: StatefulSet ss has not reached scale 0, at 3 +Oct 27 15:24:55.479: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 15:24:55.479: INFO: ss-0 shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:10 +0000 UTC }] +Oct 27 15:24:55.479: INFO: ss-1 shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC }] +Oct 27 15:24:55.479: INFO: ss-2 shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC }] +Oct 27 15:24:55.479: INFO: +Oct 27 15:24:55.479: INFO: StatefulSet ss has not reached scale 0, at 3 +Oct 27 15:24:56.492: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 15:24:56.492: INFO: ss-0 shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:10 +0000 UTC }] +Oct 27 15:24:56.492: INFO: ss-2 shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:24:30 +0000 UTC }] +Oct 27 15:24:56.492: INFO: +Oct 27 15:24:56.492: INFO: StatefulSet ss has not reached scale 0, at 2 +Oct 27 15:24:57.504: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.956900248s +Oct 27 15:24:58.515: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.945250044s +Oct 27 15:24:59.528: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.933408557s +Oct 27 15:25:00.540: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.920786044s +Oct 27 15:25:01.552: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.908453218s +Oct 27 15:25:02.566: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.895564958s +Oct 27 15:25:03.578: INFO: Verifying statefulset ss doesn't scale past 0 for another 883.095451ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9945 +Oct 27 15:25:04.590: INFO: Scaling statefulset ss to 0 +Oct 27 15:25:04.625: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:25:04.636: INFO: Deleting all statefulset in ns statefulset-9945 +Oct 27 15:25:04.647: INFO: Scaling statefulset ss to 0 +Oct 27 15:25:04.681: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:25:04.692: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:04.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9945" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":267,"skipped":4565,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:04.761: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-9013 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Oct 27 15:25:04.980: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:25:06.995: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 15:25:07.034: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:25:09.047: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:25:11.048: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 15:25:11.073: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 15:25:11.085: INFO: Pod pod-with-prestop-http-hook still exists +Oct 27 15:25:13.086: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 15:25:13.100: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:13.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-9013" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":268,"skipped":4581,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:13.231: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2420 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-88c528fc-b213-4518-a642-6749c6f03e1b +STEP: Creating a pod to test consume configMaps +Oct 27 15:25:13.444: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40" in namespace "configmap-2420" to be "Succeeded or Failed" +Oct 27 15:25:13.459: INFO: Pod "pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871494ms +Oct 27 15:25:15.471: INFO: Pod "pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027024725s +Oct 27 15:25:17.484: INFO: Pod "pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040381749s +STEP: Saw pod success +Oct 27 15:25:17.484: INFO: Pod "pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40" satisfied condition "Succeeded or Failed" +Oct 27 15:25:17.496: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40 container agnhost-container: +STEP: delete the pod +Oct 27 15:25:17.561: INFO: Waiting for pod pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40 to disappear +Oct 27 15:25:17.572: INFO: Pod pod-configmaps-a8e7b025-ecad-4756-8971-6e8ddc76ef40 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:17.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2420" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":269,"skipped":4617,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:17.607: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-407 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:17.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-407" for this suite. 
+ +• [SLOW TEST:300.291 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":270,"skipped":4629,"failed":0} +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:17.898: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-2455 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:22.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2455" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":271,"skipped":4629,"failed":0} +SSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:22.273: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5149 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5149 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5149;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5149 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5149;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5149.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5149.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5149.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5149.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5149.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5149.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5149.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.171.70.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.70.171.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.171.70.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.70.171.220_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5149 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5149;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5149 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5149;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5149.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5149.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5149.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5149.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5149.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5149.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5149.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5149.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5149.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.171.70.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.70.171.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.171.70.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.70.171.220_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:30:26.708: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:26.754: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:26.803: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:26.860: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:26.890: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:26.921: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:26.952: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:26.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.194: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.223: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.252: INFO: Unable to read jessie_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.281: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.335: INFO: Unable to read jessie_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.394: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.440: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.470: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:27.804: INFO: Lookups using dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5149 wheezy_tcp@dns-test-service.dns-5149 wheezy_udp@dns-test-service.dns-5149.svc wheezy_tcp@dns-test-service.dns-5149.svc wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5149 jessie_tcp@dns-test-service.dns-5149 jessie_udp@dns-test-service.dns-5149.svc jessie_tcp@dns-test-service.dns-5149.svc jessie_udp@_http._tcp.dns-test-service.dns-5149.svc jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc] + +Oct 27 15:30:32.900: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:32.966: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:32.996: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.025: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.114: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.144: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.354: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.383: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.413: INFO: Unable to read jessie_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.442: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.471: INFO: Unable to read jessie_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.501: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.531: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.560: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:33.743: INFO: Lookups using dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5149 wheezy_tcp@dns-test-service.dns-5149 wheezy_udp@dns-test-service.dns-5149.svc wheezy_tcp@dns-test-service.dns-5149.svc wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5149 jessie_tcp@dns-test-service.dns-5149 jessie_udp@dns-test-service.dns-5149.svc jessie_tcp@dns-test-service.dns-5149.svc jessie_udp@_http._tcp.dns-test-service.dns-5149.svc jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc] + +Oct 27 15:30:37.836: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:37.896: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:37.926: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:37.956: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:37.986: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.015: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.045: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.074: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.317: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.346: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.376: INFO: Unable to read jessie_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.423: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.453: INFO: Unable to read jessie_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.487: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.517: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.548: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:38.731: INFO: Lookups using dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5149 wheezy_tcp@dns-test-service.dns-5149 
wheezy_udp@dns-test-service.dns-5149.svc wheezy_tcp@dns-test-service.dns-5149.svc wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5149 jessie_tcp@dns-test-service.dns-5149 jessie_udp@dns-test-service.dns-5149.svc jessie_tcp@dns-test-service.dns-5149.svc jessie_udp@_http._tcp.dns-test-service.dns-5149.svc jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc] + +Oct 27 15:30:42.837: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:42.896: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:42.926: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:42.958: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:42.988: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.018: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.048: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.079: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.295: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.355: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.384: INFO: Unable to read jessie_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.450: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.479: INFO: Unable 
to read jessie_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.544: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.581: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:43.768: INFO: Lookups using dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5149 wheezy_tcp@dns-test-service.dns-5149 wheezy_udp@dns-test-service.dns-5149.svc wheezy_tcp@dns-test-service.dns-5149.svc wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5149 jessie_tcp@dns-test-service.dns-5149 jessie_udp@dns-test-service.dns-5149.svc jessie_tcp@dns-test-service.dns-5149.svc jessie_udp@_http._tcp.dns-test-service.dns-5149.svc jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc] + +Oct 27 15:30:47.835: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:47.871: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:47.917: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.033: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.094: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.123: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.155: INFO: Unable to 
read wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.367: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.396: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.425: INFO: Unable to read jessie_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.454: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.483: INFO: Unable to read jessie_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.515: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.545: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.574: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:48.763: INFO: Lookups using dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5149 wheezy_tcp@dns-test-service.dns-5149 wheezy_udp@dns-test-service.dns-5149.svc wheezy_tcp@dns-test-service.dns-5149.svc wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5149 jessie_tcp@dns-test-service.dns-5149 jessie_udp@dns-test-service.dns-5149.svc jessie_tcp@dns-test-service.dns-5149.svc jessie_udp@_http._tcp.dns-test-service.dns-5149.svc jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc] + +Oct 27 15:30:52.837: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:52.867: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:52.928: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:52.958: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:52.987: INFO: Unable to read wheezy_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.015: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.045: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.073: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.304: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.335: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.368: INFO: Unable to read jessie_udp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.399: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149 from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.428: INFO: Unable to read jessie_udp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.457: INFO: Unable to read jessie_tcp@dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.486: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc from pod dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980: the server could not find the requested resource (get pods dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980) +Oct 27 15:30:53.763: 
INFO: Lookups using dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5149 wheezy_tcp@dns-test-service.dns-5149 wheezy_udp@dns-test-service.dns-5149.svc wheezy_tcp@dns-test-service.dns-5149.svc wheezy_udp@_http._tcp.dns-test-service.dns-5149.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5149.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5149 jessie_tcp@dns-test-service.dns-5149 jessie_udp@dns-test-service.dns-5149.svc jessie_tcp@dns-test-service.dns-5149.svc jessie_udp@_http._tcp.dns-test-service.dns-5149.svc jessie_tcp@_http._tcp.dns-test-service.dns-5149.svc] + +Oct 27 15:30:58.731: INFO: DNS probes using dns-5149/dns-test-22bebc1b-36bb-4e22-8c3b-88d699936980 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:58.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5149" for this suite. +•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":272,"skipped":4633,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:58.822: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-7259 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test service account token: +Oct 27 15:30:59.063: INFO: Waiting up to 5m0s for pod "test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014" in namespace "svcaccounts-7259" to be "Succeeded or Failed" +Oct 27 15:30:59.074: INFO: Pod "test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014": Phase="Pending", Reason="", readiness=false. Elapsed: 10.85561ms +Oct 27 15:31:01.086: INFO: Pod "test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022659411s +Oct 27 15:31:03.099: INFO: Pod "test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036423766s +STEP: Saw pod success +Oct 27 15:31:03.100: INFO: Pod "test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014" satisfied condition "Succeeded or Failed" +Oct 27 15:31:03.112: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014 container agnhost-container: +STEP: delete the pod +Oct 27 15:31:03.183: INFO: Waiting for pod test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014 to disappear +Oct 27 15:31:03.194: INFO: Pod test-pod-6ed66bea-f4d6-421f-a988-af7f74da8014 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:03.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-7259" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":273,"skipped":4674,"failed":0} +SS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:03.229: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3723 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:03.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3723" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":274,"skipped":4676,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:03.591: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2547 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-ee2d8232-9b8e-4030-ae76-3aaf82b1c69a +STEP: Creating a pod to test consume secrets +Oct 27 15:31:03.810: INFO: Waiting up to 5m0s for pod "pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0" in namespace "secrets-2547" to be "Succeeded or Failed" +Oct 27 15:31:03.823: INFO: Pod "pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.377809ms +Oct 27 15:31:05.838: INFO: Pod "pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0": Phase="Running", Reason="", readiness=true. Elapsed: 2.027692606s +Oct 27 15:31:07.851: INFO: Pod "pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040707878s +STEP: Saw pod success +Oct 27 15:31:07.851: INFO: Pod "pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0" satisfied condition "Succeeded or Failed" +Oct 27 15:31:07.864: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0 container secret-volume-test: +STEP: delete the pod +Oct 27 15:31:07.929: INFO: Waiting for pod pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0 to disappear +Oct 27 15:31:07.940: INFO: Pod pod-secrets-8b114393-50ca-400a-a430-2ebdbb93ace0 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:07.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2547" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":275,"skipped":4720,"failed":0} +SSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:07.975: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1013 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +STEP: creating the pod +Oct 27 15:31:08.163: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 create -f -' +Oct 27 15:31:08.398: INFO: stderr: "" +Oct 27 15:31:08.398: INFO: stdout: "pod/pause created\n" +Oct 27 15:31:08.398: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Oct 27 15:31:08.398: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1013" to be "running and ready" +Oct 27 15:31:08.411: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.839978ms +Oct 27 15:31:10.424: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025785511s +Oct 27 15:31:12.437: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.038424423s +Oct 27 15:31:12.437: INFO: Pod "pause" satisfied condition "running and ready" +Oct 27 15:31:12.437: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: adding the label testing-label with value testing-label-value to a pod +Oct 27 15:31:12.437: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 label pods pause testing-label=testing-label-value' +Oct 27 15:31:12.550: INFO: stderr: "" +Oct 27 15:31:12.550: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Oct 27 15:31:12.550: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 get pod pause -L testing-label' +Oct 27 15:31:12.645: INFO: stderr: "" +Oct 27 15:31:12.645: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" +STEP: removing the label testing-label of a pod +Oct 27 15:31:12.646: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 label pods pause testing-label-' +Oct 27 15:31:12.763: INFO: stderr: "" +Oct 27 15:31:12.763: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Oct 27 15:31:12.763: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 get pod pause -L testing-label' +Oct 27 15:31:12.854: INFO: stderr: "" +Oct 27 15:31:12.854: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +STEP: using delete to clean up resources +Oct 27 15:31:12.855: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 delete --grace-period=0 --force -f -' +Oct 27 15:31:12.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:31:12.965: INFO: stdout: "pod \"pause\" force deleted\n" +Oct 27 15:31:12.965: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 get rc,svc -l name=pause --no-headers' +Oct 27 15:31:13.093: INFO: stderr: "No resources found in kubectl-1013 namespace.\n" +Oct 27 15:31:13.093: INFO: stdout: "" +Oct 27 15:31:13.093: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1013 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 15:31:13.203: INFO: stderr: "" +Oct 27 15:31:13.203: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:13.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1013" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":276,"skipped":4724,"failed":0} +SSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:13.237: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9484 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:31:13.466: INFO: Waiting up to 5m0s for pod "downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109" in namespace "downward-api-9484" to be "Succeeded or Failed" +Oct 27 15:31:13.478: INFO: Pod "downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109": Phase="Pending", Reason="", readiness=false. Elapsed: 11.228532ms +Oct 27 15:31:15.491: INFO: Pod "downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024392688s +Oct 27 15:31:17.504: INFO: Pod "downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037384954s +STEP: Saw pod success +Oct 27 15:31:17.504: INFO: Pod "downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109" satisfied condition "Succeeded or Failed" +Oct 27 15:31:17.515: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109 container dapi-container: +STEP: delete the pod +Oct 27 15:31:17.580: INFO: Waiting for pod downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109 to disappear +Oct 27 15:31:17.592: INFO: Pod downward-api-fb9b52b9-21b5-4e2b-bc5b-21d0491ee109 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:17.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9484" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":277,"skipped":4727,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:17.626: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-374 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:31:17.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e8e0b03-e4fb-4122-a742-2abadefba249" in namespace "projected-374" to be "Succeeded or Failed" +Oct 27 15:31:17.849: INFO: Pod "downwardapi-volume-0e8e0b03-e4fb-4122-a742-2abadefba249": Phase="Pending", Reason="", readiness=false. Elapsed: 14.581779ms +Oct 27 15:31:19.867: INFO: Pod "downwardapi-volume-0e8e0b03-e4fb-4122-a742-2abadefba249": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.032055899s +STEP: Saw pod success +Oct 27 15:31:19.867: INFO: Pod "downwardapi-volume-0e8e0b03-e4fb-4122-a742-2abadefba249" satisfied condition "Succeeded or Failed" +Oct 27 15:31:19.878: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-0e8e0b03-e4fb-4122-a742-2abadefba249 container client-container: +STEP: delete the pod +Oct 27 15:31:19.948: INFO: Waiting for pod downwardapi-volume-0e8e0b03-e4fb-4122-a742-2abadefba249 to disappear +Oct 27 15:31:19.959: INFO: Pod downwardapi-volume-0e8e0b03-e4fb-4122-a742-2abadefba249 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:19.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-374" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":278,"skipped":4732,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:19.993: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-9624 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 15:31:20.218: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:32:20.334: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 27 15:32:20.391: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 15:32:20.407: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 15:32:20.440: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 15:32:20.456: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:38.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-9624" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":279,"skipped":4748,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:38.751: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6044 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:32:39.658: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 15:32:41.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945559, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945559, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945559, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945559, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:32:44.728: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) 
+STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:57.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6044" for this suite. +STEP: Destroying namespace "webhook-6044-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":280,"skipped":4761,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:57.455: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2970 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-82f5ccbc-e2f7-4470-b638-2393a0a012cb +STEP: Creating a pod to test consume configMaps +Oct 27 15:32:57.673: INFO: Waiting up to 5m0s for pod "pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3" in namespace "configmap-2970" to be "Succeeded or Failed" +Oct 27 15:32:57.684: INFO: Pod "pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.004858ms +Oct 27 15:32:59.697: INFO: Pod "pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3": Phase="Running", Reason="", readiness=true. Elapsed: 2.023809721s +Oct 27 15:33:01.712: INFO: Pod "pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039027323s +STEP: Saw pod success +Oct 27 15:33:01.712: INFO: Pod "pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3" satisfied condition "Succeeded or Failed" +Oct 27 15:33:01.724: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3 container agnhost-container: +STEP: delete the pod +Oct 27 15:33:01.832: INFO: Waiting for pod pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3 to disappear +Oct 27 15:33:01.843: INFO: Pod pod-configmaps-7470908a-a22d-4cdf-84d8-ec6eb113b7f3 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:33:01.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2970" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":281,"skipped":4817,"failed":0} + +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:33:01.876: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-4194 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-4194 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 15:33:02.095: INFO: Found 0 stateful pods, waiting for 3 +Oct 27 15:33:12.118: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:33:12.118: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:33:12.118: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:33:12.152: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4194 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:33:12.729: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:33:12.729: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:33:12.729: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || 
true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 15:33:22.815: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Oct 27 15:33:32.877: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4194 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:33:33.406: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:33:33.406: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:33:33.406: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:33:43.480: INFO: Waiting for StatefulSet statefulset-4194/ss2 to complete update +Oct 27 15:33:43.480: INFO: Waiting for Pod statefulset-4194/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 15:33:43.480: INFO: Waiting for Pod statefulset-4194/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 15:33:53.508: INFO: Waiting for StatefulSet statefulset-4194/ss2 to complete update +Oct 27 15:33:53.508: INFO: Waiting for Pod statefulset-4194/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Rolling back to a previous revision +Oct 27 15:34:03.505: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4194 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:34:03.990: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:34:03.990: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:34:03.990: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:34:14.086: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Oct 27 15:34:14.124: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4194 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:34:14.713: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:34:14.713: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:34:14.713: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:34:24.875: INFO: Waiting for StatefulSet statefulset-4194/ss2 to complete update +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:34:34.900: INFO: Deleting all statefulset in ns statefulset-4194 +Oct 27 15:34:34.913: INFO: Scaling statefulset ss2 to 0 +Oct 27 15:34:44.969: INFO: Waiting 
for statefulset status.replicas updated to 0 +Oct 27 15:34:44.981: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:45.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4194" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":282,"skipped":4817,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:45.082: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename tables +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-1042 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:45.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-1042" for this suite. +•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":283,"skipped":4825,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:45.316: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3337 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:56.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3337" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":284,"skipped":4850,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:56.739: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-3690 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Oct 27 15:34:58.984: INFO: pods: 1 < 3 +Oct 27 15:35:01.009: INFO: running pods: 0 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 15:35:03.150: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-3690" for this 
suite. +•{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":285,"skipped":4871,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:05.277: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-1207 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:35:05.491: INFO: The status of Pod busybox-scheduling-846e7ba1-60e2-44d5-bc1e-33f1f0cd6443 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:07.503: INFO: The status of Pod busybox-scheduling-846e7ba1-60e2-44d5-bc1e-33f1f0cd6443 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:09.503: INFO: The status of Pod busybox-scheduling-846e7ba1-60e2-44d5-bc1e-33f1f0cd6443 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:09.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1207" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":286,"skipped":4888,"failed":0} + +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:09.594: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9936 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 15:35:09.778: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9936 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' +Oct 27 15:35:10.172: INFO: stderr: "" +Oct 27 15:35:10.172: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 +Oct 27 15:35:10.184: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9936 delete pods e2e-test-httpd-pod' +Oct 27 15:35:12.762: INFO: stderr: "" +Oct 27 15:35:12.762: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:12.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9936" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":287,"skipped":4888,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:12.798: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9330 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:35:13.015: INFO: The status of Pod busybox-host-aliasese3c6826f-afa1-465b-9b26-2dd7a3555087 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:15.027: INFO: The status of Pod busybox-host-aliasese3c6826f-afa1-465b-9b26-2dd7a3555087 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:15.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9330" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":288,"skipped":4925,"failed":0} + +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:15.169: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-644 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap that has name configmap-test-emptyKey-5aa86070-8eec-481b-a6db-aab8c2fad4b1 +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:15.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-644" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":289,"skipped":4925,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:15.389: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename hostport +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostport-7982 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Oct 27 15:35:15.622: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:17.636: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:19.635: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.250.0.4 on the node which pod1 resides and expect scheduled +Oct 27 15:35:19.667: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:21.679: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:23.681: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.250.0.4 but use UDP protocol on the node which pod2 resides +Oct 27 15:35:23.709: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:25.722: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:27.723: INFO: The status of Pod pod3 is Running (Ready = true) +Oct 27 15:35:27.752: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:35:29.764: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Oct 27 15:35:29.775: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.250.0.4 http://127.0.0.1:54323/hostname] Namespace:hostport-7982 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:35:29.775: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.0.4, port: 54323 +Oct 27 15:35:30.205: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.250.0.4:54323/hostname] 
Namespace:hostport-7982 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:35:30.205: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.0.4, port: 54323 UDP +Oct 27 15:35:30.584: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.250.0.4 54323] Namespace:hostport-7982 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:35:30.584: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:35.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-7982" for this suite. +•{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":290,"skipped":4933,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:36.003: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6329 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:35:36.192: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Oct 27 15:35:40.424: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 create -f -' +Oct 27 15:35:40.971: INFO: stderr: "" +Oct 27 15:35:40.971: INFO: stdout: "e2e-test-crd-publish-openapi-522-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 15:35:40.971: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 delete e2e-test-crd-publish-openapi-522-crds test-foo' +Oct 27 15:35:41.076: INFO: stderr: "" +Oct 27 15:35:41.076: INFO: stdout: "e2e-test-crd-publish-openapi-522-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Oct 27 15:35:41.076: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 apply -f -' +Oct 27 15:35:41.299: INFO: stderr: "" +Oct 27 15:35:41.299: INFO: stdout: "e2e-test-crd-publish-openapi-522-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 15:35:41.299: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 delete e2e-test-crd-publish-openapi-522-crds test-foo' +Oct 27 15:35:41.402: INFO: stderr: "" +Oct 27 15:35:41.402: INFO: stdout: "e2e-test-crd-publish-openapi-522-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Oct 27 15:35:41.402: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 create -f -' +Oct 27 15:35:41.589: INFO: rc: 1 +Oct 27 15:35:41.589: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 apply -f -' +Oct 27 15:35:41.767: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Oct 27 15:35:41.767: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 create -f -' +Oct 27 15:35:41.953: INFO: rc: 1 +Oct 27 15:35:41.953: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 --namespace=crd-publish-openapi-6329 apply -f -' +Oct 27 15:35:42.131: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Oct 27 15:35:42.132: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 explain e2e-test-crd-publish-openapi-522-crds' +Oct 27 15:35:42.317: INFO: stderr: "" +Oct 27 15:35:42.317: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-522-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Oct 27 15:35:42.318: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 explain e2e-test-crd-publish-openapi-522-crds.metadata' +Oct 27 15:35:42.497: INFO: stderr: "" +Oct 27 15:35:42.497: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-522-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. 
Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Oct 27 15:35:42.497: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 explain e2e-test-crd-publish-openapi-522-crds.spec' +Oct 27 15:35:42.677: INFO: stderr: "" +Oct 27 15:35:42.677: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-522-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Oct 27 15:35:42.677: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 explain e2e-test-crd-publish-openapi-522-crds.spec.bars' +Oct 27 15:35:42.862: INFO: stderr: "" +Oct 27 15:35:42.862: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-522-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Oct 27 15:35:42.862: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-6329 explain e2e-test-crd-publish-openapi-522-crds.spec.bars2' +Oct 27 15:35:43.047: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:46.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6329" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":291,"skipped":4960,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:46.762: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4215 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-687c1627-7c1b-4286-bc46-6d349a17e3f1 +STEP: Creating a pod to test consume configMaps +Oct 27 15:35:46.981: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f778704b-3f4c-4311-ab38-55f3767a9b98" in namespace "projected-4215" to be "Succeeded or Failed" +Oct 27 15:35:46.992: INFO: Pod "pod-projected-configmaps-f778704b-3f4c-4311-ab38-55f3767a9b98": Phase="Pending", Reason="", readiness=false. Elapsed: 11.129354ms +Oct 27 15:35:49.005: INFO: Pod "pod-projected-configmaps-f778704b-3f4c-4311-ab38-55f3767a9b98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024056517s +STEP: Saw pod success +Oct 27 15:35:49.005: INFO: Pod "pod-projected-configmaps-f778704b-3f4c-4311-ab38-55f3767a9b98" satisfied condition "Succeeded or Failed" +Oct 27 15:35:49.017: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-configmaps-f778704b-3f4c-4311-ab38-55f3767a9b98 container agnhost-container: +STEP: delete the pod +Oct 27 15:35:49.101: INFO: Waiting for pod pod-projected-configmaps-f778704b-3f4c-4311-ab38-55f3767a9b98 to disappear +Oct 27 15:35:49.112: INFO: Pod pod-projected-configmaps-f778704b-3f4c-4311-ab38-55f3767a9b98 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:49.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4215" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":292,"skipped":4969,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:49.151: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6174 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-projected-all-test-volume-7a4d734a-0f81-4758-9fd7-1fc0c7b2c7f9 +STEP: Creating secret with name secret-projected-all-test-volume-9630a02e-f1f0-4899-a892-3f48a5dc0bb0 +STEP: Creating a pod to test Check all projections for projected volume plugin +Oct 27 15:35:49.380: INFO: Waiting up to 5m0s for pod "projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c" in namespace "projected-6174" to be "Succeeded or Failed" +Oct 27 15:35:49.392: INFO: Pod "projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.467096ms +Oct 27 15:35:51.404: INFO: Pod "projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0237238s +Oct 27 15:35:53.417: INFO: Pod "projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036961357s +STEP: Saw pod success +Oct 27 15:35:53.417: INFO: Pod "projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c" satisfied condition "Succeeded or Failed" +Oct 27 15:35:53.429: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c container projected-all-volume-test: +STEP: delete the pod +Oct 27 15:35:53.533: INFO: Waiting for pod projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c to disappear +Oct 27 15:35:53.544: INFO: Pod projected-volume-a4b4955b-9a24-4582-8ccd-18fc1503405c no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:35:53.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6174" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":293,"skipped":4974,"failed":0} +SSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:35:53.578: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4591 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-projected-b9h5 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:35:53.806: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-b9h5" in namespace "subpath-4591" to be "Succeeded or Failed" +Oct 27 15:35:53.818: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.958007ms +Oct 27 15:35:55.830: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024352289s +Oct 27 15:35:57.844: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 4.037740221s +Oct 27 15:35:59.856: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 6.049920839s +Oct 27 15:36:01.870: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 8.06389759s +Oct 27 15:36:03.884: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 10.07768459s +Oct 27 15:36:05.897: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 12.091129788s +Oct 27 15:36:07.910: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 14.104213041s +Oct 27 15:36:09.923: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 16.117132131s +Oct 27 15:36:11.937: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 18.130793034s +Oct 27 15:36:13.950: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 20.144316553s +Oct 27 15:36:15.963: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Running", Reason="", readiness=true. Elapsed: 22.156599672s +Oct 27 15:36:17.976: INFO: Pod "pod-subpath-test-projected-b9h5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.169972186s +STEP: Saw pod success +Oct 27 15:36:17.976: INFO: Pod "pod-subpath-test-projected-b9h5" satisfied condition "Succeeded or Failed" +Oct 27 15:36:17.988: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-subpath-test-projected-b9h5 container test-container-subpath-projected-b9h5: +STEP: delete the pod +Oct 27 15:36:18.055: INFO: Waiting for pod pod-subpath-test-projected-b9h5 to disappear +Oct 27 15:36:18.066: INFO: Pod pod-subpath-test-projected-b9h5 no longer exists +STEP: Deleting pod pod-subpath-test-projected-b9h5 +Oct 27 15:36:18.066: INFO: Deleting pod "pod-subpath-test-projected-b9h5" in namespace "subpath-4591" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:36:18.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4591" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":294,"skipped":4977,"failed":0} +SS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:36:18.111: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-644 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Oct 27 15:36:18.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-644 21c0223f-46ad-4272-9a1c-d46244c10dbe 47903 0 2021-10-27 15:36:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 15:36:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:36:18.341: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-644 21c0223f-46ad-4272-9a1c-d46244c10dbe 47904 0 2021-10-27 15:36:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 15:36:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes 
to the configmap since the first watch closed +Oct 27 15:36:18.386: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-644 21c0223f-46ad-4272-9a1c-d46244c10dbe 47905 0 2021-10-27 15:36:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 15:36:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:36:18.386: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-644 21c0223f-46ad-4272-9a1c-d46244c10dbe 47906 0 2021-10-27 15:36:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 15:36:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:36:18.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-644" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":295,"skipped":4979,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:36:18.414: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5572 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-5572 +Oct 27 15:36:18.627: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:36:20.639: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 15:36:20.651: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 15:36:21.212: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 15:36:21.213: INFO: stdout: "iptables" +Oct 27 15:36:21.213: INFO: proxyMode: iptables +Oct 27 15:36:21.230: INFO: Waiting for pod 
kube-proxy-mode-detector to disappear +Oct 27 15:36:21.243: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-5572 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-5572 +I1027 15:36:21.279749 5768 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5572, replica count: 3 +I1027 15:36:24.330555 5768 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:36:24.374: INFO: Creating new exec pod +Oct 27 15:36:27.435: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Oct 27 15:36:27.942: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 15:36:27.942: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:36:27.942: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.24.233 80' +Oct 27 15:36:28.449: INFO: stderr: "+ nc -v -t -w 2 100.65.24.233 80\n+ echo hostName\nConnection to 100.65.24.233 80 port [tcp/http] succeeded!\n" +Oct 27 15:36:28.449: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:36:28.449: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.5 30511' +Oct 27 15:36:28.893: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.5 30511\nConnection to 10.250.0.5 30511 port [tcp/*] succeeded!\n" +Oct 27 15:36:28.893: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:36:28.893: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.4 30511' +Oct 27 15:36:29.391: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.4 30511\nConnection to 10.250.0.4 30511 port [tcp/*] succeeded!\n" +Oct 27 15:36:29.391: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:36:29.391: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://10.250.0.5:30511/ ; done' +Oct 27 15:36:30.016: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n" +Oct 27 15:36:30.016: INFO: stdout: "\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk\naffinity-nodeport-timeout-f65zk" +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.016: INFO: Received response from host: affinity-nodeport-timeout-f65zk +Oct 27 15:36:30.017: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.5:30511/' +Oct 27 15:36:30.514: INFO: stderr: "+ curl -q -s --connect-timeout 2 
http://10.250.0.5:30511/\n" +Oct 27 15:36:30.514: INFO: stdout: "affinity-nodeport-timeout-f65zk" +Oct 27 15:36:50.515: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.5:30511/' +Oct 27 15:36:51.004: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n" +Oct 27 15:36:51.004: INFO: stdout: "affinity-nodeport-timeout-f65zk" +Oct 27 15:37:11.004: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.5:30511/' +Oct 27 15:37:11.564: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n" +Oct 27 15:37:11.564: INFO: stdout: "affinity-nodeport-timeout-f65zk" +Oct 27 15:37:31.566: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.5:30511/' +Oct 27 15:37:32.101: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n" +Oct 27 15:37:32.101: INFO: stdout: "affinity-nodeport-timeout-f65zk" +Oct 27 15:37:52.101: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5572 exec execpod-affinity7v8qt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.5:30511/' +Oct 27 15:37:52.586: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.0.5:30511/\n" +Oct 27 15:37:52.586: INFO: stdout: "affinity-nodeport-timeout-j28c5" +Oct 27 15:37:52.586: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5572, will wait for the garbage collector to delete the pods +Oct 27 15:37:52.677: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 13.198275ms +Oct 27 15:37:52.778: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.010693ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:37:55.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5572" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":296,"skipped":4993,"failed":0} +S +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:37:55.639: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8077 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating pod +Oct 27 15:37:55.888: INFO: The status of Pod pod-hostip-63da73d6-2606-4ab6-978f-5c7bc721947b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:37:57.901: INFO: The status of Pod pod-hostip-63da73d6-2606-4ab6-978f-5c7bc721947b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:37:59.916: INFO: The status of Pod pod-hostip-63da73d6-2606-4ab6-978f-5c7bc721947b is Running (Ready = true) +Oct 27 15:38:00.024: INFO: Pod pod-hostip-63da73d6-2606-4ab6-978f-5c7bc721947b has hostIP: 10.250.0.4 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:00.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8077" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":297,"skipped":4994,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:00.059: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-134 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-2163 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-9858 +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:13.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-134" for this suite. +STEP: Destroying namespace "nsdeletetest-2163" for this suite. +Oct 27 15:38:13.732: INFO: Namespace nsdeletetest-2163 was already deleted +STEP: Destroying namespace "nsdeletetest-9858" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":298,"skipped":5122,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:13.745: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4512 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:38:13.948: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac" in namespace "downward-api-4512" to be "Succeeded or Failed" +Oct 27 15:38:13.960: INFO: Pod "downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac": Phase="Pending", Reason="", readiness=false. Elapsed: 11.163348ms +Oct 27 15:38:15.972: INFO: Pod "downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023444065s +Oct 27 15:38:17.985: INFO: Pod "downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036736113s +STEP: Saw pod success +Oct 27 15:38:17.985: INFO: Pod "downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac" satisfied condition "Succeeded or Failed" +Oct 27 15:38:17.997: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac container client-container: +STEP: delete the pod +Oct 27 15:38:18.066: INFO: Waiting for pod downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac to disappear +Oct 27 15:38:18.077: INFO: Pod downwardapi-volume-bb1ab604-abb6-4b20-aae7-74e4aaf4bdac no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:18.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4512" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":299,"skipped":5137,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:18.117: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4323 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-daeb1c81-0b25-45b1-bcd1-846af8fd9358 +STEP: Creating a pod to test consume configMaps +Oct 27 15:38:18.333: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca" in namespace "projected-4323" to be "Succeeded or Failed" +Oct 27 15:38:18.344: INFO: Pod "pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.985312ms +Oct 27 15:38:20.356: INFO: Pod "pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023540934s +Oct 27 15:38:22.369: INFO: Pod "pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036430962s +STEP: Saw pod success +Oct 27 15:38:22.369: INFO: Pod "pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca" satisfied condition "Succeeded or Failed" +Oct 27 15:38:22.381: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca container agnhost-container: +STEP: delete the pod +Oct 27 15:38:22.447: INFO: Waiting for pod pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca to disappear +Oct 27 15:38:22.458: INFO: Pod pod-projected-configmaps-49d30186-efa9-45f2-8fee-bbddcace13ca no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:22.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4323" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":300,"skipped":5149,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:22.493: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-887 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override arguments +Oct 27 15:38:22.701: INFO: Waiting up to 5m0s for pod "client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639" in namespace "containers-887" to be "Succeeded or Failed" +Oct 27 15:38:22.712: INFO: Pod "client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639": Phase="Pending", Reason="", readiness=false. Elapsed: 11.513786ms +Oct 27 15:38:24.726: INFO: Pod "client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025107619s +Oct 27 15:38:26.739: INFO: Pod "client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038025267s +STEP: Saw pod success +Oct 27 15:38:26.739: INFO: Pod "client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639" satisfied condition "Succeeded or Failed" +Oct 27 15:38:26.750: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639 container agnhost-container: +STEP: delete the pod +Oct 27 15:38:26.812: INFO: Waiting for pod client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639 to disappear +Oct 27 15:38:26.823: INFO: Pod client-containers-8eac060f-c7ec-4216-8b8f-9c2363f62639 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:26.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-887" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":301,"skipped":5167,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:26.857: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3975 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:38:27.059: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4486ce55-d0af-4e7a-bae9-9f1beb1b4abf" in namespace "security-context-test-3975" to be "Succeeded or Failed" +Oct 27 15:38:27.070: INFO: Pod "alpine-nnp-false-4486ce55-d0af-4e7a-bae9-9f1beb1b4abf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.895428ms +Oct 27 15:38:29.083: INFO: Pod "alpine-nnp-false-4486ce55-d0af-4e7a-bae9-9f1beb1b4abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023938097s +Oct 27 15:38:31.095: INFO: Pod "alpine-nnp-false-4486ce55-d0af-4e7a-bae9-9f1beb1b4abf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035897329s +Oct 27 15:38:33.107: INFO: Pod "alpine-nnp-false-4486ce55-d0af-4e7a-bae9-9f1beb1b4abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048589525s +Oct 27 15:38:33.107: INFO: Pod "alpine-nnp-false-4486ce55-d0af-4e7a-bae9-9f1beb1b4abf" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:33.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-3975" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":302,"skipped":5189,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:33.235: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2520 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-2520 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-2520 +I1027 15:38:33.484443 5768 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2520, replica count: 2 +I1027 15:38:36.535573 5768 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:38:36.535: INFO: Creating new exec pod +Oct 27 15:38:41.577: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:38:42.165: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:42.165: INFO: stdout: "" +Oct 27 15:38:43.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:38:43.655: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:43.655: INFO: stdout: "" +Oct 27 15:38:44.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:38:44.655: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" 
+Oct 27 15:38:44.655: INFO: stdout: "" +Oct 27 15:38:45.165: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:38:45.757: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:45.757: INFO: stdout: "" +Oct 27 15:38:46.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:38:46.662: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:46.662: INFO: stdout: "" +Oct 27 15:38:47.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:38:47.658: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:47.658: INFO: stdout: "" +Oct 27 15:38:48.165: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 15:38:48.677: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:48.677: INFO: stdout: "externalname-service-hmdcf" +Oct 27 15:38:48.677: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.217.185 80' +Oct 27 15:38:49.260: INFO: stderr: "+ nc -v -t -w 2 100.64.217.185 80\n+ echo hostName\nConnection to 100.64.217.185 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:49.260: INFO: stdout: "" +Oct 27 15:38:50.261: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.217.185 80' +Oct 27 15:38:50.728: INFO: stderr: "+ nc -v -t -w 2 100.64.217.185 80\nConnection to 100.64.217.185 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Oct 27 15:38:50.728: INFO: stdout: "" +Oct 27 15:38:51.260: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2520 exec execpodf5b62 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.217.185 80' +Oct 27 15:38:51.747: INFO: stderr: "+ echo 
hostName\n+ nc -v -t -w 2 100.64.217.185 80\nConnection to 100.64.217.185 80 port [tcp/http] succeeded!\n" +Oct 27 15:38:51.747: INFO: stdout: "externalname-service-b8nw8" +Oct 27 15:38:51.747: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:51.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2520" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":303,"skipped":5215,"failed":0} +SSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:51.803: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-9558 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 15:38:52.030: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:39:52.139: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 27 15:39:52.189: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 15:39:52.205: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 15:39:52.245: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 15:39:52.261: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:02.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-9558" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":304,"skipped":5218,"failed":0} +SSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:02.536: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1100 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-af677079-c561-4034-a226-88ce59abf4bc +STEP: Creating a pod to test consume configMaps +Oct 27 15:40:02.914: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a795221-51e6-4c7a-a7f8-d397b7a15d3e" in namespace "configmap-1100" to be "Succeeded or Failed" +Oct 27 15:40:03.012: INFO: Pod "pod-configmaps-3a795221-51e6-4c7a-a7f8-d397b7a15d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 98.93926ms +Oct 27 15:40:05.025: INFO: Pod "pod-configmaps-3a795221-51e6-4c7a-a7f8-d397b7a15d3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.11111583s +STEP: Saw pod success +Oct 27 15:40:05.025: INFO: Pod "pod-configmaps-3a795221-51e6-4c7a-a7f8-d397b7a15d3e" satisfied condition "Succeeded or Failed" +Oct 27 15:40:05.037: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-configmaps-3a795221-51e6-4c7a-a7f8-d397b7a15d3e container agnhost-container: +STEP: delete the pod +Oct 27 15:40:05.101: INFO: Waiting for pod pod-configmaps-3a795221-51e6-4c7a-a7f8-d397b7a15d3e to disappear +Oct 27 15:40:05.111: INFO: Pod pod-configmaps-3a795221-51e6-4c7a-a7f8-d397b7a15d3e no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:05.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1100" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":305,"skipped":5223,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:05.151: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-228 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption is created +Oct 27 15:40:05.365: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:40:07.378: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:40:09.377: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:09.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-228" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":306,"skipped":5247,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:09.453: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-1803 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:09.634: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption-2 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2-9288 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-1803 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:09.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-9288" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:09.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-1803" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":307,"skipped":5259,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:10.000: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7623 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-19ac0d79-796f-46c8-8f14-309e5f6f6c1a +STEP: Creating secret with name s-test-opt-upd-f244b47f-ab46-4716-a5d0-8cb13d89bfc4 +STEP: Creating the pod +Oct 27 15:40:10.246: INFO: The status of Pod pod-projected-secrets-32fcb1f8-34cf-4c7a-acfe-a6402bd1f05a is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:40:12.260: INFO: The status of Pod pod-projected-secrets-32fcb1f8-34cf-4c7a-acfe-a6402bd1f05a is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:40:14.260: INFO: The status of Pod pod-projected-secrets-32fcb1f8-34cf-4c7a-acfe-a6402bd1f05a is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-19ac0d79-796f-46c8-8f14-309e5f6f6c1a +STEP: Updating secret s-test-opt-upd-f244b47f-ab46-4716-a5d0-8cb13d89bfc4 +STEP: Creating secret with name s-test-opt-create-e19461ca-2091-43a3-ba9e-81fd6427ab76 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:34.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7623" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":308,"skipped":5285,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:34.652: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6884 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-20852ba8-a156-4623-bd39-7751d9e45382 +STEP: Creating a pod to test consume configMaps +Oct 27 15:41:34.870: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd" in namespace "projected-6884" to be "Succeeded or Failed" +Oct 27 15:41:34.880: INFO: Pod "pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.725499ms +Oct 27 15:41:36.892: INFO: Pod "pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022786757s +Oct 27 15:41:38.906: INFO: Pod "pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035969358s +STEP: Saw pod success +Oct 27 15:41:38.906: INFO: Pod "pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd" satisfied condition "Succeeded or Failed" +Oct 27 15:41:38.917: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd container agnhost-container: +STEP: delete the pod +Oct 27 15:41:39.033: INFO: Waiting for pod pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd to disappear +Oct 27 15:41:39.044: INFO: Pod pod-projected-configmaps-7afbd4cb-4262-4b98-89a0-b534288344dd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:39.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6884" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":309,"skipped":5293,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:39.077: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8683 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:39.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8683" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":310,"skipped":5303,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:39.402: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-3050 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 15:41:39.583: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:44.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3050" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":311,"skipped":5354,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:44.440: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8904 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating cluster-info +Oct 27 15:41:44.619: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8904 cluster-info' +Oct 27 15:41:44.715: INFO: stderr: "" +Oct 27 15:41:44.716: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:44.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8904" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":312,"skipped":5445,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:44.740: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5089 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-5089 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-5089 +STEP: Waiting until pod test-pod will start running in namespace statefulset-5089 +STEP: Creating statefulset with conflicting port in namespace statefulset-5089 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5089 +Oct 27 15:41:49.068: INFO: Observed stateful pod in namespace: statefulset-5089, name: ss-0, uid: 12f08091-a246-4830-a7ca-ee4d6f7212eb, status phase: Pending. Waiting for statefulset controller to delete. +Oct 27 15:41:49.112: INFO: Observed stateful pod in namespace: statefulset-5089, name: ss-0, uid: 12f08091-a246-4830-a7ca-ee4d6f7212eb, status phase: Failed. Waiting for statefulset controller to delete. +Oct 27 15:41:49.117: INFO: Observed stateful pod in namespace: statefulset-5089, name: ss-0, uid: 12f08091-a246-4830-a7ca-ee4d6f7212eb, status phase: Failed. Waiting for statefulset controller to delete. 
+Oct 27 15:41:49.123: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5089 +STEP: Removing pod with conflicting port in namespace statefulset-5089 +STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5089 and reaches the running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:41:53.179: INFO: Deleting all statefulset in ns statefulset-5089 +Oct 27 15:41:53.190: INFO: Scaling statefulset ss to 0 +Oct 27 15:42:03.312: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:42:03.412: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:03.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5089" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":313,"skipped":5453,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:03.555: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8066 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:08.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8066" for this
suite. +•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":314,"skipped":5463,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:08.613: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-single-pod +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-6208 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Oct 27 15:42:08.793: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:43:08.895: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:43:08.906: INFO: Starting informer... +STEP: Starting pod... +Oct 27 15:43:08.936: INFO: Pod is running on shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Oct 27 15:43:08.975: INFO: Pod wasn't evicted. Proceeding +Oct 27 15:43:08.975: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Oct 27 15:44:24.019: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:24.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-6208" for this suite. 
+•{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":315,"skipped":5501,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:24.053: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1860 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 15:44:24.255: INFO: Waiting up to 5m0s for pod "pod-d1b1ddae-11fd-4248-b472-613a580c5cca" in namespace "emptydir-1860" to be "Succeeded or Failed" +Oct 27 15:44:24.265: INFO: Pod "pod-d1b1ddae-11fd-4248-b472-613a580c5cca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.772337ms +Oct 27 15:44:26.278: INFO: Pod "pod-d1b1ddae-11fd-4248-b472-613a580c5cca": Phase="Running", Reason="", readiness=true. Elapsed: 2.023271446s +Oct 27 15:44:28.291: INFO: Pod "pod-d1b1ddae-11fd-4248-b472-613a580c5cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036126162s +STEP: Saw pod success +Oct 27 15:44:28.291: INFO: Pod "pod-d1b1ddae-11fd-4248-b472-613a580c5cca" satisfied condition "Succeeded or Failed" +Oct 27 15:44:28.302: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-d1b1ddae-11fd-4248-b472-613a580c5cca container test-container: +STEP: delete the pod +Oct 27 15:44:28.366: INFO: Waiting for pod pod-d1b1ddae-11fd-4248-b472-613a580c5cca to disappear +Oct 27 15:44:28.377: INFO: Pod pod-d1b1ddae-11fd-4248-b472-613a580c5cca no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:28.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1860" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":316,"skipped":5533,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:28.411: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-2402 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:44:28.597: INFO: Creating deployment "test-recreate-deployment" +Oct 27 15:44:28.609: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Oct 27 15:44:28.634: INFO: Waiting deployment "test-recreate-deployment" to complete +Oct 27 15:44:28.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:44:30.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946268, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} 
+Oct 27 15:44:32.659: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Oct 27 15:44:32.683: INFO: Updating deployment test-recreate-deployment +Oct 27 15:44:32.683: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:44:32.751: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-2402 1d8f3d55-5f3c-426a-9f4d-7b35cde3ca14 51407 2 2021-10-27 15:44:28 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 15:44:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:44:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e82da8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 15:44:32 +0000 UTC,LastTransitionTime:2021-10-27 15:44:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is 
progressing.,LastUpdateTime:2021-10-27 15:44:32 +0000 UTC,LastTransitionTime:2021-10-27 15:44:28 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Oct 27 15:44:32.765: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-2402 2af7847f-e211-47af-9186-f13ec234f879 51406 1 2021-10-27 15:44:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1d8f3d55-5f3c-426a-9f4d-7b35cde3ca14 0xc004e83260 0xc004e83261}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:44:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d8f3d55-5f3c-426a-9f4d-7b35cde3ca14\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:44:32 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e832f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:44:32.765: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Oct 27 15:44:32.765: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-2402 73d0f3df-629e-4ea3-a99b-9ca92e43528d 51399 2 2021-10-27 15:44:28 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1d8f3d55-5f3c-426a-9f4d-7b35cde3ca14 0xc004e83147 0xc004e83148}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:44:28 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d8f3d55-5f3c-426a-9f4d-7b35cde3ca14\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:44:32 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e831f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:44:32.776: INFO: Pod "test-recreate-deployment-85d47dcb4-wdc69" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-wdc69 test-recreate-deployment-85d47dcb4- deployment-2402 cb00b019-1cd0-468f-ab80-c3e0d8c1dd9a 51408 0 2021-10-27 15:44:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 2af7847f-e211-47af-9186-f13ec234f879 0xc004e83750 0xc004e83751}] [] [{kube-controller-manager Update v1 2021-10-27 15:44:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2af7847f-e211-47af-9186-f13ec234f879\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:44:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wcqr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmgxs-skc.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcqr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:44:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:44:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:44:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:44:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.4,PodIP:,StartTime:2021-10-27 15:44:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:32.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2402" for this suite. 
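The behaviour verified here (old pods torn down before new ones come up) follows from `strategy.type: Recreate`. A minimal sketch with illustrative names, reusing the httpd e2e image the log shows; the newer tag is an assumption based on the images the conformance suite ships:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo            # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate               # delete all old pods, then create new ones
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
# Trigger a rollout; with Recreate there is never a moment with both versions running.
kubectl set image deployment/recreate-demo httpd=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
kubectl rollout status deployment/recreate-demo
```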
+•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":317,"skipped":5589,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:32.809: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8779 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test env composition +Oct 27 15:44:33.015: INFO: Waiting up to 5m0s for pod "var-expansion-14b94426-289b-49db-b3d8-164323830af7" in namespace "var-expansion-8779" to be "Succeeded or Failed" +Oct 27 15:44:33.026: INFO: Pod "var-expansion-14b94426-289b-49db-b3d8-164323830af7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.923309ms +Oct 27 15:44:35.039: INFO: Pod "var-expansion-14b94426-289b-49db-b3d8-164323830af7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023593342s +STEP: Saw pod success +Oct 27 15:44:35.039: INFO: Pod "var-expansion-14b94426-289b-49db-b3d8-164323830af7" satisfied condition "Succeeded or Failed" +Oct 27 15:44:35.051: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod var-expansion-14b94426-289b-49db-b3d8-164323830af7 container dapi-container: +STEP: delete the pod +Oct 27 15:44:35.123: INFO: Waiting for pod var-expansion-14b94426-289b-49db-b3d8-164323830af7 to disappear +Oct 27 15:44:35.134: INFO: Pod var-expansion-14b94426-289b-49db-b3d8-164323830af7 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:35.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-8779" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":318,"skipped":5596,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:35.167: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename certificates +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-6342 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 15:44:36.184: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 15:44:36.221: INFO: waiting for watch events with expected annotations +Oct 27 15:44:36.221: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:36.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-6342" for this suite. 
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":319,"skipped":5632,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:36.375: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1537 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:44:36.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2" in namespace "downward-api-1537" to be "Succeeded or Failed" +Oct 27 15:44:36.586: INFO: Pod "downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.394597ms +Oct 27 15:44:38.598: INFO: Pod "downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024085031s +Oct 27 15:44:40.611: INFO: Pod "downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036598877s +STEP: Saw pod success +Oct 27 15:44:40.611: INFO: Pod "downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2" satisfied condition "Succeeded or Failed" +Oct 27 15:44:40.622: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2 container client-container: +STEP: delete the pod +Oct 27 15:44:40.734: INFO: Waiting for pod downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2 to disappear +Oct 27 15:44:40.744: INFO: Pod downwardapi-volume-01f2aa8e-747b-43f7-ac18-576cd183c0c2 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:40.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1537" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":320,"skipped":5638,"failed":0} + +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:40.779: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-7470 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nspatchtest-a07dacbd-1db9-42be-86ea-5bb88c74f494-7780 +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:41.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-7470" for this suite. +STEP: Destroying namespace "nspatchtest-a07dacbd-1db9-42be-86ea-5bb88c74f494-7780" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":321,"skipped":5638,"failed":0} +SSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:41.190: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-9670 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Oct 27 15:44:41.423: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:44:43.530: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9670" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":322,"skipped":5641,"failed":0} +SS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:44:43.563: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5657 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-f52d300f-054f-4127-9df9-be3832651f5f in namespace container-probe-5657 +Oct 27 15:44:47.788: INFO: Started pod busybox-f52d300f-054f-4127-9df9-be3832651f5f in namespace container-probe-5657 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:44:47.799: INFO: Initial restart count of pod busybox-f52d300f-054f-4127-9df9-be3832651f5f is 0 +Oct 27 15:45:36.205: INFO: Restart count of pod container-probe-5657/busybox-f52d300f-054f-4127-9df9-be3832651f5f is now 1 (48.405904471s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:45:36.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5657" for this suite. 
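The restart counted above is the kubelet reacting to a failing exec probe. A minimal sketch with an illustrative name; the file vanishes after 30s, so the `cat /tmp/health` probe starts failing and the container gets restarted:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-exec-demo -w   # RESTARTS should tick to 1 after ~40s
```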
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":323,"skipped":5643,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:45:36.257: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-6068 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:45:36.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-6068" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":324,"skipped":5653,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:45:36.543: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1590 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-2926 +STEP: Creating secret with name secret-test-bd9909fa-377b-41fc-be11-aeca6dbf39a2 +STEP: Creating a pod to test consume secrets +Oct 27 15:45:36.931: INFO: Waiting up to 5m0s for pod "pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595" in namespace "secrets-1590" to be "Succeeded or Failed" +Oct 27 15:45:36.942: INFO: Pod "pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.508534ms +Oct 27 15:45:38.955: INFO: Pod "pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02434295s +Oct 27 15:45:40.969: INFO: Pod "pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037562321s +STEP: Saw pod success +Oct 27 15:45:40.969: INFO: Pod "pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595" satisfied condition "Succeeded or Failed" +Oct 27 15:45:40.980: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595 container secret-volume-test: +STEP: delete the pod +Oct 27 15:45:41.090: INFO: Waiting for pod pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595 to disappear +Oct 27 15:45:41.101: INFO: Pod pod-secrets-0c1d6ee2-4097-4c65-b482-fa3bebcde595 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:45:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1590" for this suite. +STEP: Destroying namespace "secret-namespace-2926" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":325,"skipped":5664,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:45:41.149: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8757 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 15:45:41.335: INFO: namespace kubectl-8757 +Oct 27 15:45:41.335: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8757 create -f -' +Oct 27 15:45:41.862: INFO: stderr: "" +Oct 27 15:45:41.862: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 15:45:42.875: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:45:42.875: INFO: Found 0 / 1 +Oct 27 15:45:43.875: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:45:43.875: INFO: Found 0 / 1 +Oct 27 15:45:44.875: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:45:44.875: INFO: Found 1 / 1 +Oct 27 15:45:44.875: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 +Oct 27 15:45:44.886: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:45:44.886: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 15:45:44.886: INFO: wait on agnhost-primary startup in kubectl-8757 +Oct 27 15:45:44.886: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8757 logs agnhost-primary-dl2bz agnhost-primary' +Oct 27 15:45:45.047: INFO: stderr: "" +Oct 27 15:45:45.047: INFO: stdout: "Paused\n" +STEP: exposing RC +Oct 27 15:45:45.047: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8757 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Oct 27 15:45:45.170: INFO: stderr: "" +Oct 27 15:45:45.170: INFO: stdout: "service/rm2 exposed\n" +Oct 27 15:45:45.182: INFO: Service rm2 in namespace kubectl-8757 found. +STEP: exposing service +Oct 27 15:45:47.208: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8757 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Oct 27 15:45:47.324: INFO: stderr: "" +Oct 27 15:45:47.324: INFO: stdout: "service/rm3 exposed\n" +Oct 27 15:45:47.334: INFO: Service rm3 in namespace kubectl-8757 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:45:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8757" for this suite. 
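The expose sequence in the log generalizes to any workload with a selector; a sketch that mirrors it using a Deployment as a stand-in for the test's ReplicationController (service names and ports copied from the log):

```bash
kubectl create deployment agnhost-primary \
  --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 -- /agnhost pause
# Expose the workload as a Service mapping port 1234 -> container port 6379.
kubectl expose deployment agnhost-primary --name=rm2 --port=1234 --target-port=6379
# A Service can itself be exposed again under a new name and port.
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get svc rm2 rm3
```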
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":326,"skipped":5695,"failed":0} +S +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:45:49.393: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8530 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 15:45:49.579: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:45:53.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-8530" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":327,"skipped":5696,"failed":0} + +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:45:53.448: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-3659 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod test-webserver-8f533030-c14b-4bbd-b63d-35b958e909c0 in namespace container-probe-3659 +Oct 27 15:45:57.689: INFO: Started pod test-webserver-8f533030-c14b-4bbd-b63d-35b958e909c0 in namespace container-probe-3659 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:45:57.700: INFO: Initial restart count of pod test-webserver-8f533030-c14b-4bbd-b63d-35b958e909c0 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:59.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3659" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":328,"skipped":5696,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:59.304: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2602 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:49:59.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2602" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":329,"skipped":5711,"failed":0} +SSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:49:59.528: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2854 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-0721b667-224b-4492-b1a0-cd7ebf59f7fe +STEP: Creating a pod to test consume secrets +Oct 27 15:49:59.754: INFO: Waiting up to 5m0s for pod "pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25" in namespace "secrets-2854" to be "Succeeded or Failed" +Oct 27 15:49:59.765: INFO: Pod "pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662454ms +Oct 27 15:50:01.812: INFO: Pod "pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.057923526s +Oct 27 15:50:03.825: INFO: Pod "pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070639985s +STEP: Saw pod success +Oct 27 15:50:03.825: INFO: Pod "pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25" satisfied condition "Succeeded or Failed" +Oct 27 15:50:03.836: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25 container secret-volume-test: +STEP: delete the pod +Oct 27 15:50:03.945: INFO: Waiting for pod pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25 to disappear +Oct 27 15:50:03.955: INFO: Pod pod-secrets-2785d497-3406-44bb-abb7-4ea0a9d70c25 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:50:03.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2854" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":330,"skipped":5716,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:50:04.034: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2529 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-64314496-85e8-4ccd-8d15-edb7ca108b09 +STEP: Creating a pod to test consume configMaps +Oct 27 15:50:04.252: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae5e3aeb-2012-44fa-bba2-88749472c9cd" in namespace "projected-2529" to be "Succeeded or Failed" +Oct 27 15:50:04.263: INFO: Pod "pod-projected-configmaps-ae5e3aeb-2012-44fa-bba2-88749472c9cd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.491707ms +Oct 27 15:50:06.276: INFO: Pod "pod-projected-configmaps-ae5e3aeb-2012-44fa-bba2-88749472c9cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.024558893s +STEP: Saw pod success +Oct 27 15:50:06.276: INFO: Pod "pod-projected-configmaps-ae5e3aeb-2012-44fa-bba2-88749472c9cd" satisfied condition "Succeeded or Failed" +Oct 27 15:50:06.288: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-configmaps-ae5e3aeb-2012-44fa-bba2-88749472c9cd container agnhost-container: +STEP: delete the pod +Oct 27 15:50:06.398: INFO: Waiting for pod pod-projected-configmaps-ae5e3aeb-2012-44fa-bba2-88749472c9cd to disappear +Oct 27 15:50:06.410: INFO: Pod pod-projected-configmaps-ae5e3aeb-2012-44fa-bba2-88749472c9cd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:50:06.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2529" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":331,"skipped":5735,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:50:06.443: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3223 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 15:50:06.646: INFO: Waiting up to 5m0s for pod "pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7" in namespace "emptydir-3223" to be "Succeeded or Failed" +Oct 27 15:50:06.660: INFO: Pod "pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.238058ms +Oct 27 15:50:08.673: INFO: Pod "pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026694727s +Oct 27 15:50:10.688: INFO: Pod "pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041939452s +STEP: Saw pod success +Oct 27 15:50:10.688: INFO: Pod "pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7" satisfied condition "Succeeded or Failed" +Oct 27 15:50:10.701: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7 container test-container: +STEP: delete the pod +Oct 27 15:50:10.766: INFO: Waiting for pod pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7 to disappear +Oct 27 15:50:10.778: INFO: Pod pod-27ed5a4b-cb27-4e5d-95cd-54bfc4df28a7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:50:10.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3223" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":332,"skipped":5766,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:50:10.811: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4487 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:50:10.997: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 15:50:14.763: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4487 --namespace=crd-publish-openapi-4487 create -f -' +Oct 27 15:50:15.372: INFO: stderr: "" +Oct 27 15:50:15.372: INFO: stdout: "e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 15:50:15.372: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4487 --namespace=crd-publish-openapi-4487 delete e2e-test-crd-publish-openapi-6249-crds test-cr' +Oct 27 15:50:15.486: INFO: stderr: "" +Oct 27 15:50:15.486: INFO: stdout: "e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Oct 27 15:50:15.486: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4487 --namespace=crd-publish-openapi-4487 apply -f -' 
+Oct 27 15:50:15.734: INFO: stderr: "" +Oct 27 15:50:15.734: INFO: stdout: "e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 15:50:15.734: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4487 --namespace=crd-publish-openapi-4487 delete e2e-test-crd-publish-openapi-6249-crds test-cr' +Oct 27 15:50:15.850: INFO: stderr: "" +Oct 27 15:50:15.850: INFO: stdout: "e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Oct 27 15:50:15.850: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmgxs-skc.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4487 explain e2e-test-crd-publish-openapi-6249-crds' +Oct 27 15:50:16.043: INFO: stderr: "" +Oct 27 15:50:16.043: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6249-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:50:20.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4487" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":333,"skipped":5796,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:50:20.292: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-2208 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:50:20.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-2208" for this suite. 
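+To inspect by hand what the EndpointSlice test above verifies: the API server is exposed through the `kubernetes` Service in `default`, and its slices should carry the standard `kubernetes.io/service-name` label. A hedged sketch:
+```bash
+# Both the legacy Endpoints object and at least one EndpointSlice for the
+# apiserver should exist and point at the control-plane address(es).
+kubectl get endpoints kubernetes -n default -o wide
+kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes
+```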
+•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":334,"skipped":5810,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:50:20.536: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-7831 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in volume subpath +Oct 27 15:50:20.742: INFO: Waiting up to 5m0s for pod "var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182" in namespace "var-expansion-7831" to be "Succeeded or Failed" +Oct 27 15:50:20.753: INFO: Pod "var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182": Phase="Pending", Reason="", readiness=false. Elapsed: 10.928924ms +Oct 27 15:50:22.766: INFO: Pod "var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023766084s +Oct 27 15:50:24.779: INFO: Pod "var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036734689s +STEP: Saw pod success +Oct 27 15:50:24.779: INFO: Pod "var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182" satisfied condition "Succeeded or Failed" +Oct 27 15:50:24.791: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182 container dapi-container: +STEP: delete the pod +Oct 27 15:50:24.862: INFO: Waiting for pod var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182 to disappear +Oct 27 15:50:24.873: INFO: Pod var-expansion-deb5e383-417c-4b94-ad2a-3fc84b404182 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:50:24.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-7831" for this suite. 
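+The substitution exercised above goes through the volume `subPathExpr` field, which expands `$(VAR)` references from the container's environment at runtime. A rough sketch of such a pod (illustrative names, not the test's manifest):
+```bash
+# Sketch: mount an emptyDir at a per-pod subpath derived from the
+# downward-API POD_NAME variable via subPathExpr.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-expansion-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox:1.34
+    command: ["sh", "-c", "ls /volume_mount"]
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    volumeMounts:
+    - name: workdir
+      mountPath: /volume_mount
+      subPathExpr: $(POD_NAME)
+  volumes:
+  - name: workdir
+    emptyDir: {}
+EOF
+```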
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":335,"skipped":5822,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:50:24.906: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9462 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-9462 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 15:50:25.121: INFO: Found 0 stateful pods, waiting for 3 +Oct 27 15:50:35.135: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:50:35.135: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:50:35.135: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 15:50:35.203: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Oct 27 15:50:35.260: INFO: Updating stateful set ss2 +Oct 27 15:50:35.285: INFO: Waiting for Pod statefulset-9462/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Restoring Pods to the correct revision when they are deleted +Oct 27 15:50:45.360: INFO: Found 2 stateful pods, waiting for 3 +Oct 27 15:50:55.374: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:50:55.374: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:50:55.374: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Oct 27 15:50:55.431: INFO: Updating stateful set ss2 +Oct 27 15:50:55.455: INFO: Waiting for Pod statefulset-9462/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 15:51:05.526: INFO: Updating stateful set ss2 +Oct 27 15:51:05.551: INFO: Waiting for StatefulSet statefulset-9462/ss2 to complete update +Oct 27 15:51:05.552: INFO: Waiting for Pod statefulset-9462/ss2-0 to have revision ss2-5bbbc9fc94 
update revision ss2-677d6db895 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:51:15.578: INFO: Deleting all statefulset in ns statefulset-9462 +Oct 27 15:51:15.590: INFO: Scaling statefulset ss2 to 0 +Oct 27 15:51:25.640: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:51:25.652: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:51:25.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9462" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":336,"skipped":5857,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:51:25.720: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2769 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:51:42.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2769" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":346,"completed":337,"skipped":5917,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:51:42.156: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-3081 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:51:42.343: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:51:42.369: INFO: Waiting for terminating namespaces to be deleted... +Oct 27 15:51:42.380: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-6zrq8 before test +Oct 27 15:51:42.398: INFO: addons-nginx-ingress-controller-76f55b7b5f-ffxv8 from kube-system started at 2021-10-27 14:09:38 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-w2blg from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: apiserver-proxy-vdnm2 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (2 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: calico-node-bmkxt from kube-system started at 2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: calico-node-vertical-autoscaler-785b5f968-sbxt6 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: calico-typha-deploy-546b97d4b5-kw64w from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-p96rk from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: calico-typha-vertical-autoscaler-5c9655cddd-z7tgn from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: coredns-7649bdf444-cnjp5 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses 
recorded) +Oct 27 15:51:42.398: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: coredns-7649bdf444-x6nkv from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: csi-driver-node-disk-tb5lc from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: csi-driver-node-file-8vk78 from kube-system started at 2021-10-27 13:56:14 +0000 UTC (3 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: kube-proxy-7d5xq from kube-system started at 2021-10-27 14:56:47 +0000 UTC (2 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: metrics-server-5555d7587-mw896 from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: node-exporter-fg8qw from kube-system started at 2021-10-27 13:56:14 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: node-problem-detector-bxt7r from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: vpn-shoot-7f6446d489-9kghs from kube-system started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: dashboard-metrics-scraper-7ccbfc448f-jcrjk from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:51:42.398: INFO: kubernetes-dashboard-65d5f5c55-sf9qc from kubernetes-dashboard started at 2021-10-27 13:56:51 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.398: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 15:51:42.398: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 before test +Oct 27 15:51:42.424: INFO: apiserver-proxy-8bg6p from kube-system started at 2021-10-27 13:56:32 +0000 UTC (2 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: blackbox-exporter-65c549b94c-vc8rp from kube-system started at 2021-10-27 14:08:45 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: calico-node-v56vf from kube-system started at 
2021-10-27 14:03:54 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: csi-driver-node-disk-h74nf from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: csi-driver-node-file-q9zq2 from kube-system started at 2021-10-27 13:56:32 +0000 UTC (3 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: kube-proxy-mlg7s from kube-system started at 2021-10-27 14:56:47 +0000 UTC (2 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: node-exporter-fs6fl from kube-system started at 2021-10-27 13:56:32 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:51:42.424: INFO: node-problem-detector-srvcj from kube-system started at 2021-10-27 14:07:47 +0000 UTC (1 container statuses recorded) +Oct 27 15:51:42.424: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-93d8aedb-13e8-40f3-a489-bdd3832f9558 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.250.0.4 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-93d8aedb-13e8-40f3-a489-bdd3832f9558 off the node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-93d8aedb-13e8-40f3-a489-bdd3832f9558 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:56:50.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3081" for this suite. 
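+To restate the predicate exercised above: once pod4 binds hostPort 54322 on hostIP 0.0.0.0 (all addresses), pod5's request for 54322/TCP on the node's specific IP 10.250.0.4 cannot be satisfied on the same node, so pod5 stays Pending. If you reproduce this by hand, the conflict surfaces in scheduling events; a sketch (pod names taken from the log above):
+```bash
+# The second pod should report a FailedScheduling event citing the
+# host port conflict rather than ever becoming Running.
+kubectl describe pod pod5 | grep -A3 Events
+kubectl get events --field-selector reason=FailedScheduling
+```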
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:308.556 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":338,"skipped":5919,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:56:50.713: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-548 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:56:50.900: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Oct 27 15:56:51.981: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:56:51.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-548" for this suite. 
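+The failure condition checked above lands in the ReplicationController's `status.conditions`. Reproducing it by hand takes a pod quota plus an oversized RC; the YAML below is a sketch (names mirror the log, the image is illustrative):
+```bash
+# A 2-pod quota plus a 3-replica RC drives the RC into a ReplicaFailure
+# condition; scaling down to fit the quota should clear it.
+kubectl create quota condition-test --hard=pods=2
+kubectl create -f - <<'EOF'
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: condition-test
+spec:
+  replicas: 3
+  selector:
+    app: condition-test
+  template:
+    metadata:
+      labels:
+        app: condition-test
+    spec:
+      containers:
+      - name: pause
+        image: k8s.gcr.io/pause:3.5
+EOF
+kubectl get rc condition-test -o jsonpath='{.status.conditions}'
+kubectl scale rc condition-test --replicas=2   # condition should clear
+```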
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":339,"skipped":5942,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:56:52.030: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-1921 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:58:00.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-1921" for this suite. +•{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":340,"skipped":5965,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:58:00.343: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-1314 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override command +Oct 27 15:58:00.548: INFO: Waiting up to 5m0s for pod "client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd" in namespace "containers-1314" to be "Succeeded or Failed" +Oct 27 15:58:00.560: INFO: Pod "client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.214516ms +Oct 27 15:58:02.611: INFO: Pod "client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063714377s +Oct 27 15:58:04.624: INFO: Pod "client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076567333s +STEP: Saw pod success +Oct 27 15:58:04.624: INFO: Pod "client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd" satisfied condition "Succeeded or Failed" +Oct 27 15:58:04.635: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd container agnhost-container: +STEP: delete the pod +Oct 27 15:58:04.744: INFO: Waiting for pod client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd to disappear +Oct 27 15:58:04.755: INFO: Pod client-containers-f89ac38b-839e-425e-80e2-51e1d976efdd no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:58:04.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1314" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":341,"skipped":6000,"failed":0} +S +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:58:04.789: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:58:05.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3" for this suite. 
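+The lifecycle steps logged above map one-to-one onto ordinary kubectl verbs; a sketch with hypothetical names and labels:
+```bash
+# Create, fetch, patch, list by label across namespaces, then delete by
+# collection with the same label selector -- the sequence the test walks.
+kubectl create configmap demo-cm --from-literal=key=value
+kubectl label configmap demo-cm e2e=lifecycle
+kubectl get configmap demo-cm -o yaml
+kubectl patch configmap demo-cm -p '{"data":{"key":"patched"}}'
+kubectl get configmaps --all-namespaces -l e2e=lifecycle
+kubectl delete configmaps -l e2e=lifecycle
+kubectl get configmaps   # demo-cm should no longer be listed
+```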
+•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":342,"skipped":6001,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:58:05.075: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingress +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingress-7879 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 15:58:05.351: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 15:58:05.371: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 15:58:05.418: INFO: waiting for watch events with expected annotations +Oct 27 15:58:05.418: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:58:05.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-7879" for this suite. 
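+The discovery steps at the top of that test can be replayed directly against the API server (the log line `/apis/networking.k8s.iov1` simply elides the slash); for example:
+```bash
+# Walk the same discovery endpoints, then list Ingresses cluster-wide as
+# the test's "cluster-wide listing" step does.
+kubectl get --raw /apis | head -c 400
+kubectl get --raw /apis/networking.k8s.io
+kubectl get --raw /apis/networking.k8s.io/v1
+kubectl get ingress --all-namespaces
+```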
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":343,"skipped":6011,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:58:05.555: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4004 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-2da2ef3b-934a-4841-9ce4-9b3157c73000 +STEP: Creating a pod to test consume configMaps +Oct 27 15:58:05.767: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2e191d6a-8e0e-43e6-a8d8-775fe7ad9f40" in namespace "projected-4004" to be "Succeeded or Failed" +Oct 27 15:58:05.778: INFO: Pod "pod-projected-configmaps-2e191d6a-8e0e-43e6-a8d8-775fe7ad9f40": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663687ms +Oct 27 15:58:07.790: INFO: Pod "pod-projected-configmaps-2e191d6a-8e0e-43e6-a8d8-775fe7ad9f40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022491833s +STEP: Saw pod success +Oct 27 15:58:07.790: INFO: Pod "pod-projected-configmaps-2e191d6a-8e0e-43e6-a8d8-775fe7ad9f40" satisfied condition "Succeeded or Failed" +Oct 27 15:58:07.801: INFO: Trying to get logs from node shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 pod pod-projected-configmaps-2e191d6a-8e0e-43e6-a8d8-775fe7ad9f40 container agnhost-container: +STEP: delete the pod +Oct 27 15:58:07.923: INFO: Waiting for pod pod-projected-configmaps-2e191d6a-8e0e-43e6-a8d8-775fe7ad9f40 to disappear +Oct 27 15:58:07.934: INFO: Pod pod-projected-configmaps-2e191d6a-8e0e-43e6-a8d8-775fe7ad9f40 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:58:07.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4004" for this suite. 
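+For reference, the "consume configMaps" pattern above corresponds to a `projected` volume with a ConfigMap source; a sketch with illustrative names:
+```bash
+# Expose a ConfigMap key as a file through a projected volume and print it,
+# roughly what the consume-configMaps conformance tests do.
+kubectl create configmap projected-demo --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-configmaps-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: agnhost-container
+    image: busybox:1.34
+    command: ["cat", "/etc/projected/data-1"]
+    volumeMounts:
+    - name: cfg
+      mountPath: /etc/projected
+  volumes:
+  - name: cfg
+    projected:
+      sources:
+      - configMap:
+          name: projected-demo
+EOF
+```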
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":344,"skipped":6048,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:58:07.966: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-7784 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 15:58:08.241: INFO: running pods: 0 < 3 +Oct 27 15:58:10.253: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:58:12.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-7784" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":345,"skipped":6058,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:58:12.298: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-4350 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 15:58:12.512: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:59:12.622: INFO: Waiting for terminating namespaces to be deleted... 
+[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:59:12.633: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-794 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Oct 27 15:59:16.887: INFO: found a healthy node: shoot--it--tmgxs-skc-worker-1-5f9b7-txdf2 +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:59:31.080: INFO: pods created so far: [1 1 1] +Oct 27 15:59:31.080: INFO: length of pods created so far: 3 +Oct 27 15:59:35.109: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:59:42.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-794" for this suite. +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:59:42.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-4350" for this suite. 
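+The preemption path driven above rests on PriorityClasses: higher-priority pods may evict lower-priority ones when a node is full. A minimal hand-run sketch (names and values are illustrative):
+```bash
+# A high-value PriorityClass and a pod that uses it; with the node saturated
+# by lower-priority pods, the scheduler preempts to place this one.
+kubectl create priorityclass demo-high --value=1000 --description="demo priority"
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: high-prio-demo
+spec:
+  priorityClassName: demo-high
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.5
+EOF
+```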
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
+•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":346,"skipped":6076,"failed":0}
+SSSSSSSSSSOct 27 15:59:42.301: INFO: Running AfterSuite actions on all nodes
+Oct 27 15:59:42.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
+Oct 27 15:59:42.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
+Oct 27 15:59:42.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
+Oct 27 15:59:42.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
+Oct 27 15:59:42.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
+Oct 27 15:59:42.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
+Oct 27 15:59:42.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
+Oct 27 15:59:42.301: INFO: Running AfterSuite actions on node 1
+Oct 27 15:59:42.301: INFO: Skipping dumping logs from cluster
+
+JUnit report was created: /tmp/e2e/artifacts/1635343708/junit_01.xml
+{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6086,"failed":0}
+
+Ran 346 of 6432 Specs in 6670.556 seconds
+SUCCESS! -- 346 Passed | 0 Failed | 0 Flaked | 0 Pending | 6086 Skipped
+PASS
+
+Ginkgo ran 1 suite in 1h51m13.014168682s
+Test Suite Passed
diff --git a/v1.22/gardener-azure/junit_01.xml b/v1.22/gardener-azure/junit_01.xml
new file mode 100644
index 0000000000..f63593c72e
--- /dev/null
+++ b/v1.22/gardener-azure/junit_01.xml
@@ -0,0 +1,18607 @@
[junit_01.xml body omitted: 18,607 lines of JUnit XML whose markup is not recoverable from this extraction]
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
\ No newline at end of file
diff --git a/v1.22/gardener-gcp/PRODUCT.yaml b/v1.22/gardener-gcp/PRODUCT.yaml
new file mode 100644
index 0000000000..8781dd0266
--- /dev/null
+++ b/v1.22/gardener-gcp/PRODUCT.yaml
@@ -0,0 +1,9 @@
+vendor: SAP
+name: Gardener (https://github.com/gardener/gardener) shoot cluster deployed on GCE
+version: v1.34.0
+website_url: https://gardener.cloud
+repo_url: https://github.com/gardener/
+documentation_url: https://github.com/gardener/documentation/wiki
+product_logo_url: https://raw.githubusercontent.com/gardener/documentation/master/images/logo_w_saplogo.svg
+type: installer
+description: The Gardener implements automated management and operation of Kubernetes clusters as a service and aims to support that service on multiple cloud providers.
\ No newline at end of file
diff --git a/v1.22/gardener-gcp/README.md b/v1.22/gardener-gcp/README.md
new file mode 100644
index 0000000000..647dbcb2f7
--- /dev/null
+++ b/v1.22/gardener-gcp/README.md
@@ -0,0 +1,80 @@
+# Reproducing the test results
+
+## Install Gardener on your Kubernetes Landscape
+Check out https://github.com/gardener/garden-setup for more detailed instructions and additional information. To install Gardener in your base cluster, the command line tool [sow](https://github.com/gardener/sow) is used. Use the provided Docker image that already contains `sow` and all required tools. To execute `sow`, you call a [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) which starts `sow` in a Docker container (Docker will download the image from [eu.gcr.io/gardener-project/sow](http://eu.gcr.io/gardener-project/sow) if it is not available locally yet). Docker executes the `sow` command with the given arguments and mounts parts of your file system into that container so that `sow` can read the configuration files for the installation of the Gardener components and persist the state of your installation. After `sow` finishes, Docker removes the container again.
+
+1. Clone the `sow` repository and add the path to the [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) to your `PATH` variable so you can call `sow` on the command line.
+
+    ```bash
+    # setup for calling sow via the wrapper
+    git clone "https://github.com/gardener/sow"
+    cd sow
+    export PATH=$PATH:$PWD/docker/bin
+    ```
+
+2. Create a directory `landscape` for your Gardener landscape and clone this repository into a subdirectory called `crop`:
+
+    ```bash
+    cd ..
+    mkdir landscape
+    cd landscape
+    git clone "https://github.com/gardener/garden-setup" crop
+    ```
+
+3. If you don't have your `kubeconfig` stored locally yet, download it. For example, for GKE you would use the following command:
+
+    ```bash
+    gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
+    ```
+
+4. Save your `kubeconfig` somewhere in your `landscape` directory. For the remaining steps we will assume that you saved it using the file path `landscape/kubeconfig`.
+
+5. In your `landscape` directory, create a configuration file called `acre.yaml`. The structure of the configuration file is described in [configuration file acre.yaml](https://github.com/gardener/garden-setup#configuration-file-acreyaml); a minimal skeleton is also sketched at the end of this guide. Note that the relative file path `./kubeconfig` must be specified in the field `landscape.cluster.kubeconfig`.
+
+    > Do not use the file `acre.yaml` in directory `crop`. This file is used internally by the installation tool.
+
+6. If you created the base cluster using GKE, convert your `kubeconfig` file to one that uses basic authentication with Google-specific configuration parameters:
+
+    ```bash
+    sow convertkubeconfig
+    ```
+
+    When asked for credentials, enter the ones that the GKE dashboard shows when you click on `show credentials`.
+
+    `sow` will replace the file specified in `landscape.cluster.kubeconfig` of your `acre.yaml` file with a kubeconfig file that uses basic authentication.
+
+7. In your first terminal window, use the following command to check in which order the components will be installed. Nothing will be deployed yet, so this is also a way to verify that the syntax of your `acre.yaml` is correct:
+
+    ```bash
+    sow order -A
+    ```
+
+8. If there are no error messages, use the following command to deploy Gardener on your base cluster:
+
+    ```bash
+    sow deploy -A
+    ```
+
+9. `sow` now starts to install Gardener in your base cluster. The installation can take about 30 minutes. `sow` prints status messages to the terminal window so that you can check the status of the installation. If you watch the base cluster from a second terminal window, you will see the newly created Kubernetes resources appear after a while, together with whether their deployment was successful. Wait until the last component is deployed and all created Kubernetes resources are in status `Running`.
+
+10. Use the following command to find out the URL of the Gardener dashboard:
+
+    ```bash
+    sow url
+    ```
+
+## Create Kubernetes Cluster
+
+Log in to the SAP Gardener Dashboard to create a Kubernetes cluster on the Amazon Web Services, Microsoft Azure, Google Cloud Platform, Alibaba Cloud, or OpenStack cloud provider.
+
+## Launch E2E Conformance Tests
+Set `KUBECONFIG` to the path of the kubeconfig file of your newly created cluster (you can find the kubeconfig e.g. in the Gardener dashboard). Follow the instructions below to run the Kubernetes e2e conformance tests. Adjust the values of the `k8sVersion` and `cloudprovider` arguments to match your new cluster.
+
+```bash
+# first set KUBECONFIG to your cluster
+docker run -ti --rm -v $KUBECONFIG:/mye2e/shoot.config golang:1.13 bash
+# run all commands below within the container
+go get github.com/gardener/test-infra; cd /go/src/github.com/gardener/test-infra
+export GO111MODULE=on; export E2E_EXPORT_PATH=/tmp/export; export KUBECONFIG=/mye2e/shoot.config; export GINKGO_PARALLEL=false
+go run -mod=vendor ./integration-tests/e2e --k8sVersion=1.22.2 --cloudprovider=gcp --testcasegroup="conformance"
+```
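+
+Before kicking off the conformance run, a quick pre-flight check of the target cluster can save a failed attempt. This is a minimal sketch rather than part of the official test flow; it assumes `kubectl` is installed locally and that `KUBECONFIG` already points to your new shoot cluster:
+
+```bash
+# Pre-flight check (assumes kubectl is installed and KUBECONFIG is exported).
+# The e2e suite begins by waiting for schedulable nodes and for ready pods in
+# kube-system (see the log below), so verifying both up front avoids a run
+# that stalls at startup.
+kubectl version                  # server version should match --k8sVersion (here v1.22.x)
+kubectl get nodes                # every node should report STATUS=Ready
+kubectl get pods -n kube-system  # all system pods should be Running
+```
+
+After the run completes, the exported artifacts (e.g. the e2e log) should end up under the directory set in `E2E_EXPORT_PATH`, `/tmp/export` in the invocation above, inside the container.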
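+
+Finally, since the full `acre.yaml` schema lives in the garden-setup documentation linked in step 5, the skeleton below is only a hypothetical illustration of where the kubeconfig path from step 5 goes; every field other than `landscape.cluster.kubeconfig` is elided and must be filled in from the linked docs:
+
+```bash
+# Write a skeleton acre.yaml in the landscape directory.
+# Only landscape.cluster.kubeconfig is confirmed by this guide; all other
+# fields are elided -- consult the garden-setup README for the full schema.
+cat > acre.yaml <<'EOF'
+landscape:
+  cluster:
+    kubeconfig: ./kubeconfig   # relative path, as required in step 5
+  # ... domain, iaas, and network settings per the garden-setup docs ...
+EOF
+```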
\ No newline at end of file
diff --git a/v1.22/gardener-gcp/e2e.log b/v1.22/gardener-gcp/e2e.log
new file mode 100644
index 0000000000..b323641369
--- /dev/null
+++ b/v1.22/gardener-gcp/e2e.log
@@ -0,0 +1,13791 @@
+Conformance test: not doing test setup.
+I1027 14:03:06.837807 5683 e2e.go:129] Starting e2e run "5a0c32b1-2020-4b27-b9ec-04d5f89fa62f" on Ginkgo node 1 +{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1635343386 - Will randomize all specs +Will run 346 of 6432 specs + +Oct 27 14:03:08.910: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:03:08.912: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Oct 27 14:03:08.978: INFO: Waiting up to 10m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready +Oct 27 14:03:09.068: INFO: 24 / 24 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Oct 27 14:03:09.068: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready. +Oct 27 14:03:09.068: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Oct 27 14:03:09.098: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'apiserver-proxy' (0 seconds elapsed) +Oct 27 14:03:09.098: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Oct 27 14:03:09.098: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-driver-node' (0 seconds elapsed) +Oct 27 14:03:09.098: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Oct 27 14:03:09.098: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed) +Oct 27 14:03:09.098: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed) +Oct 27 14:03:09.098: INFO: e2e test version: v1.22.2 +Oct 27 14:03:09.107: INFO: kube-apiserver version: v1.22.2 +Oct 27 14:03:09.107: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:03:09.121: INFO: Cluster IP family: ipv4 +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:09.121: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +W1027 14:03:09.184233 5683 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:03:09.184: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled +Oct 27 14:03:09.204: INFO: PSP annotation exists on dry run pod: "extensions.gardener.cloud.provider-gcp.csi-driver-node"; assuming PodSecurityPolicy is enabled +W1027 14:03:09.215772 5683 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +W1027 14:03:09.228681 5683 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Oct 27 14:03:09.262: INFO: Found ClusterRoles; assuming RBAC is enabled. 
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-2382 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 14:03:09.438: INFO: Waiting up to 5m0s for pod "security-context-9c88e01b-d89e-49a3-a970-2481099c078c" in namespace "security-context-2382" to be "Succeeded or Failed" +Oct 27 14:03:09.449: INFO: Pod "security-context-9c88e01b-d89e-49a3-a970-2481099c078c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.343175ms +Oct 27 14:03:11.462: INFO: Pod "security-context-9c88e01b-d89e-49a3-a970-2481099c078c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024152844s +Oct 27 14:03:13.476: INFO: Pod "security-context-9c88e01b-d89e-49a3-a970-2481099c078c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03784192s +STEP: Saw pod success +Oct 27 14:03:13.476: INFO: Pod "security-context-9c88e01b-d89e-49a3-a970-2481099c078c" satisfied condition "Succeeded or Failed" +Oct 27 14:03:13.488: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod security-context-9c88e01b-d89e-49a3-a970-2481099c078c container test-container: +STEP: delete the pod +Oct 27 14:03:13.564: INFO: Waiting for pod security-context-9c88e01b-d89e-49a3-a970-2481099c078c to disappear +Oct 27 14:03:13.575: INFO: Pod security-context-9c88e01b-d89e-49a3-a970-2481099c078c no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:13.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-2382" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":1,"skipped":34,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:13.612: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4813 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-d4735dc9-89af-4633-a03c-fd567dd9c534 +STEP: Creating a pod to test consume secrets +Oct 27 14:03:13.836: INFO: Waiting up to 5m0s for pod "pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562" in namespace "secrets-4813" to be "Succeeded or Failed" +Oct 27 14:03:13.848: INFO: Pod "pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562": Phase="Pending", Reason="", readiness=false. Elapsed: 12.04495ms +Oct 27 14:03:15.862: INFO: Pod "pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025449991s +Oct 27 14:03:17.874: INFO: Pod "pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038292318s +Oct 27 14:03:19.887: INFO: Pod "pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050477587s +Oct 27 14:03:21.931: INFO: Pod "pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094788556s +STEP: Saw pod success +Oct 27 14:03:21.931: INFO: Pod "pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562" satisfied condition "Succeeded or Failed" +Oct 27 14:03:21.943: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562 container secret-volume-test: +STEP: delete the pod +Oct 27 14:03:22.059: INFO: Waiting for pod pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562 to disappear +Oct 27 14:03:22.071: INFO: Pod pod-secrets-0af8ff36-27d2-478e-8d63-11223e434562 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:22.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4813" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":2,"skipped":46,"failed":0} +S +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:22.105: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-8916 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:03:22.421: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 14:03:22.444: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:03:22.492: INFO: waiting for watch events with expected annotations +Oct 27 14:03:22.492: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:22.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-8916" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":3,"skipped":47,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:22.583: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-8601 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:22.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-8601" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":4,"skipped":98,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:22.820: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-5855 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's args +Oct 27 14:03:23.029: INFO: Waiting up to 5m0s for pod "var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd" in namespace "var-expansion-5855" to be "Succeeded or Failed" +Oct 27 14:03:23.040: INFO: Pod "var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.134056ms +Oct 27 14:03:25.053: INFO: Pod "var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023924622s +Oct 27 14:03:27.065: INFO: Pod "var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035919147s +STEP: Saw pod success +Oct 27 14:03:27.065: INFO: Pod "var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd" satisfied condition "Succeeded or Failed" +Oct 27 14:03:27.076: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd container dapi-container: +STEP: delete the pod +Oct 27 14:03:27.114: INFO: Waiting for pod var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd to disappear +Oct 27 14:03:27.125: INFO: Pod var-expansion-0c35cf9e-b642-4cff-bd0d-76d87225d7dd no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:27.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5855" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":5,"skipped":119,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:27.159: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-5624 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:03:27.350: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 14:03:31.537: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5624 --namespace=crd-publish-openapi-5624 create -f -' +Oct 27 14:03:34.790: INFO: stderr: "" +Oct 27 14:03:34.790: INFO: stdout: "e2e-test-crd-publish-openapi-5997-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 14:03:34.790: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5624 --namespace=crd-publish-openapi-5624 delete e2e-test-crd-publish-openapi-5997-crds test-cr' +Oct 27 14:03:34.895: INFO: stderr: "" +Oct 27 14:03:34.895: INFO: stdout: "e2e-test-crd-publish-openapi-5997-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Oct 27 14:03:34.895: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5624 --namespace=crd-publish-openapi-5624 apply -f -' +Oct 27 14:03:35.108: INFO: stderr: "" +Oct 27 14:03:35.108: INFO: stdout: "e2e-test-crd-publish-openapi-5997-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 27 14:03:35.108: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5624 --namespace=crd-publish-openapi-5624 delete e2e-test-crd-publish-openapi-5997-crds test-cr' +Oct 27 14:03:35.202: INFO: stderr: "" +Oct 27 14:03:35.202: INFO: stdout: "e2e-test-crd-publish-openapi-5997-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 14:03:35.202: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-5624 explain e2e-test-crd-publish-openapi-5997-crds' +Oct 27 14:03:35.372: INFO: stderr: "" +Oct 27 14:03:35.372: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5997-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:39.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-5624" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":6,"skipped":134,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:39.107: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1916 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:03:39.379: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a27dbff4-0428-45fd-afaf-bb1022aab0ed", Controller:(*bool)(0xc002fc72fe), BlockOwnerDeletion:(*bool)(0xc002fc72ff)}} +Oct 27 14:03:39.433: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d6d879b7-6eb8-41e1-8697-485147c375a8", Controller:(*bool)(0xc00295aeee), BlockOwnerDeletion:(*bool)(0xc00295aeef)}} +Oct 27 14:03:39.449: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"29078890-6865-4263-b71f-8eea055787c3", Controller:(*bool)(0xc0032d43fe), BlockOwnerDeletion:(*bool)(0xc0032d43ff)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:44.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1916" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":7,"skipped":135,"failed":0} + +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:44.510: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5801 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 14:03:44.700: INFO: namespace kubectl-5801 +Oct 27 14:03:44.700: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5801 create -f -' +Oct 27 14:03:44.968: INFO: stderr: "" +Oct 27 14:03:44.968: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 14:03:45.981: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:03:45.981: INFO: Found 0 / 1 +Oct 27 14:03:46.980: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:03:46.980: INFO: Found 0 / 1 +Oct 27 14:03:47.980: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:03:47.980: INFO: Found 1 / 1 +Oct 27 14:03:47.980: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 27 14:03:47.995: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:03:47.995: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 27 14:03:47.995: INFO: wait on agnhost-primary startup in kubectl-5801 +Oct 27 14:03:47.995: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5801 logs agnhost-primary-tfwpq agnhost-primary' +Oct 27 14:03:48.143: INFO: stderr: "" +Oct 27 14:03:48.143: INFO: stdout: "Paused\n" +STEP: exposing RC +Oct 27 14:03:48.143: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5801 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Oct 27 14:03:48.277: INFO: stderr: "" +Oct 27 14:03:48.277: INFO: stdout: "service/rm2 exposed\n" +Oct 27 14:03:48.290: INFO: Service rm2 in namespace kubectl-5801 found. 
+STEP: exposing service +Oct 27 14:03:50.315: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5801 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Oct 27 14:03:50.428: INFO: stderr: "" +Oct 27 14:03:50.428: INFO: stdout: "service/rm3 exposed\n" +Oct 27 14:03:50.439: INFO: Service rm3 in namespace kubectl-5801 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:52.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5801" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":8,"skipped":135,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:52.496: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6895 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
+Oct 27 14:03:52.710: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6895 a1eae474-023c-4723-98b2-3c68f595bfba 4005 0 2021-10-27 14:03:52 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-27 14:03:52 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p7tf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7tf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname
:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:03:52.722: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:03:54.734: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Oct 27 14:03:54.734: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6895 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:03:54.734: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Verifying customized DNS server is configured on pod... +Oct 27 14:03:54.993: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6895 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:03:54.993: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:03:55.296: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:03:55.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6895" for this suite. 
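+The long pod dump above boils down to two fields: `dnsPolicy: None` disables the cluster DNS defaults, and `dnsConfig` supplies the resolver settings verbatim. A minimal sketch of the same pod (names are illustrative), checked by reading the rendered resolv.conf directly instead of via agnhost:
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dns-config-demo
+spec:
+  dnsPolicy: None
+  dnsConfig:
+    nameservers: ["1.1.1.1"]
+    searches: ["resolv.conf.local"]
+  containers:
+  - name: agnhost
+    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+    args: ["pause"]
+EOF
+kubectl wait --for=condition=Ready pod/dns-config-demo
+kubectl exec dns-config-demo -- cat /etc/resolv.conf
+# expected: "nameserver 1.1.1.1" and "search resolv.conf.local"
+```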
+•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":9,"skipped":159,"failed":0} + +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:03:55.348: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6032 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-809febc5-073b-4066-abb9-f5baf72e5b01 +STEP: Creating secret with name s-test-opt-upd-dc3d374f-430c-428e-9997-1052435e18ca +STEP: Creating the pod +Oct 27 14:03:55.612: INFO: The status of Pod pod-projected-secrets-ce9a129d-50ae-4641-8882-42a53eb2f33d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:03:57.631: INFO: The status of Pod pod-projected-secrets-ce9a129d-50ae-4641-8882-42a53eb2f33d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:03:59.624: INFO: The status of Pod pod-projected-secrets-ce9a129d-50ae-4641-8882-42a53eb2f33d is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-809febc5-073b-4066-abb9-f5baf72e5b01 +STEP: Updating secret s-test-opt-upd-dc3d374f-430c-428e-9997-1052435e18ca +STEP: Creating secret with name s-test-opt-create-9e3b6db1-9394-4140-9ce7-b3812b1f7d03 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:12.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6032" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":10,"skipped":159,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:12.939: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7602 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating all guestbook components +Oct 27 14:05:13.162: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Oct 27 14:05:13.162: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 create -f -' +Oct 27 14:05:13.369: INFO: stderr: "" +Oct 27 14:05:13.369: INFO: stdout: "service/agnhost-replica created\n" +Oct 27 14:05:13.369: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Oct 27 14:05:13.369: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 create -f -' +Oct 27 14:05:13.567: INFO: stderr: "" +Oct 27 14:05:13.567: INFO: stdout: "service/agnhost-primary created\n" +Oct 27 14:05:13.568: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Oct 27 14:05:13.568: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 create -f -' +Oct 27 14:05:13.761: INFO: stderr: "" +Oct 27 14:05:13.761: INFO: stdout: "service/frontend created\n" +Oct 27 14:05:13.761: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Oct 27 14:05:13.761: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 create -f -' +Oct 27 14:05:13.947: INFO: stderr: "" +Oct 27 14:05:13.947: INFO: stdout: "deployment.apps/frontend created\n" +Oct 27 14:05:13.947: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 14:05:13.947: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 create -f -' +Oct 27 14:05:14.148: INFO: stderr: "" +Oct 27 14:05:14.148: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Oct 27 14:05:14.148: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 27 14:05:14.149: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 create -f -' +Oct 27 14:05:14.335: INFO: stderr: "" +Oct 27 14:05:14.335: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Oct 27 14:05:14.335: INFO: Waiting for all frontend pods to be Running. +Oct 27 14:05:24.388: INFO: Waiting for frontend to serve content. +Oct 27 14:05:24.416: INFO: Trying to add a new entry to the guestbook. +Oct 27 14:05:24.488: INFO: Verifying that added entry can be retrieved. 
+STEP: using delete to clean up resources +Oct 27 14:05:24.546: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 delete --grace-period=0 --force -f -' +Oct 27 14:05:24.652: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:05:24.652: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:05:24.652: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 delete --grace-period=0 --force -f -' +Oct 27 14:05:24.753: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:05:24.753: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:05:24.753: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 delete --grace-period=0 --force -f -' +Oct 27 14:05:24.858: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:05:24.858: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:05:24.858: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 delete --grace-period=0 --force -f -' +Oct 27 14:05:25.030: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:05:25.030: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:05:25.030: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 delete --grace-period=0 --force -f -' +Oct 27 14:05:25.332: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:05:25.332: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Oct 27 14:05:25.333: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7602 delete --grace-period=0 --force -f -' +Oct 27 14:05:25.462: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:05:25.462: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:25.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7602" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":11,"skipped":176,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:25.553: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6577 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name projected-secret-test-fd139f74-9c6c-47d1-a7b2-51a3b87dbd47 +STEP: Creating a pod to test consume secrets +Oct 27 14:05:25.779: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be" in namespace "projected-6577" to be "Succeeded or Failed" +Oct 27 14:05:25.790: INFO: Pod "pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be": Phase="Pending", Reason="", readiness=false. Elapsed: 11.20945ms +Oct 27 14:05:27.803: INFO: Pod "pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024032861s +Oct 27 14:05:29.817: INFO: Pod "pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037703909s +STEP: Saw pod success +Oct 27 14:05:29.817: INFO: Pod "pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be" satisfied condition "Succeeded or Failed" +Oct 27 14:05:29.829: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be container secret-volume-test: +STEP: delete the pod +Oct 27 14:05:29.865: INFO: Waiting for pod pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be to disappear +Oct 27 14:05:29.876: INFO: Pod pod-projected-secrets-92120a11-75e8-445c-a83e-f726d8b8c8be no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:29.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6577" for this suite. 
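+In the multiple-volumes test the same secret is projected into the pod twice, under two mount points, and both copies must carry the expected content. A minimal sketch under the same assumptions (all names illustrative):
+
+```bash
+kubectl create secret generic multi-vol-secret --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: multi-volume-demo
+spec:
+  containers:
+  - name: agnhost
+    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+    args: ["pause"]
+    volumeMounts:
+    - {name: vol-a, mountPath: /etc/vol-a}
+    - {name: vol-b, mountPath: /etc/vol-b}
+  volumes:
+  - name: vol-a
+    projected: {sources: [{secret: {name: multi-vol-secret}}]}
+  - name: vol-b
+    projected: {sources: [{secret: {name: multi-vol-secret}}]}
+EOF
+kubectl wait --for=condition=Ready pod/multi-volume-demo
+kubectl exec multi-volume-demo -- sh -c 'cat /etc/vol-a/data-1 /etc/vol-b/data-1'
+```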
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":12,"skipped":199,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:29.911: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-2854 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption is created +Oct 27 14:05:30.148: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:05:32.161: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:05:34.163: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:05:36.163: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:05:38.162: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:38.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-2854" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":13,"skipped":207,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:38.239: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5690 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:05:38.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:05:40.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940338, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:05:43.970: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:05:43.982: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3000-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:46.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5690" for this suite. +STEP: Destroying namespace "webhook-5690-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":14,"skipped":214,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:47.232: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-105 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test service account token: +Oct 27 14:05:47.447: INFO: Waiting up to 5m0s for pod "test-pod-64a7b6ae-6903-4e14-a168-442b8d9f8d95" in namespace "svcaccounts-105" to be "Succeeded or Failed" +Oct 27 14:05:47.458: INFO: Pod "test-pod-64a7b6ae-6903-4e14-a168-442b8d9f8d95": Phase="Pending", Reason="", readiness=false. Elapsed: 11.114141ms +Oct 27 14:05:49.531: INFO: Pod "test-pod-64a7b6ae-6903-4e14-a168-442b8d9f8d95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.083449224s +STEP: Saw pod success +Oct 27 14:05:49.531: INFO: Pod "test-pod-64a7b6ae-6903-4e14-a168-442b8d9f8d95" satisfied condition "Succeeded or Failed" +Oct 27 14:05:49.544: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod test-pod-64a7b6ae-6903-4e14-a168-442b8d9f8d95 container agnhost-container: +STEP: delete the pod +Oct 27 14:05:49.643: INFO: Waiting for pod test-pod-64a7b6ae-6903-4e14-a168-442b8d9f8d95 to disappear +Oct 27 14:05:49.655: INFO: Pod test-pod-64a7b6ae-6903-4e14-a168-442b8d9f8d95 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:49.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-105" for this suite. 
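+The projected service-account token in this test is requested per pod via a `serviceAccountToken` volume source, which yields a short-lived, audience-bound JWT rather than the legacy secret-based token. A minimal sketch (pod name, path, and expiry are illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sa-token-demo
+spec:
+  containers:
+  - name: agnhost
+    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+    args: ["pause"]
+    volumeMounts:
+    - {name: sa-token, mountPath: /var/run/secrets/tokens}
+  volumes:
+  - name: sa-token
+    projected:
+      sources:
+      - serviceAccountToken:
+          path: sa-token
+          expirationSeconds: 3600
+EOF
+kubectl wait --for=condition=Ready pod/sa-token-demo
+kubectl exec sa-token-demo -- cat /var/run/secrets/tokens/sa-token  # prints the JWT
+```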
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":15,"skipped":224,"failed":0} +SSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:49.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-2489 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:05:52.067: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:05:52.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-2489" for this suite. +•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":16,"skipped":228,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:05:52.164: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-8093 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:05:52.653: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:06:52.963: INFO: Waiting for terminating namespaces to be deleted... 
+[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:06:52.975: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-1892 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Oct 27 14:06:57.258: INFO: found a healthy node: shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:07:09.641: INFO: pods created so far: [1 1 1] +Oct 27 14:07:09.641: INFO: length of pods created so far: 3 +Oct 27 14:07:11.671: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:18.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-1892" for this suite. +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:18.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-8093" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":17,"skipped":265,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:18.881: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-8840 +STEP: Waiting for a default service account to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:07:19.076: INFO: Creating pod... 
+Oct 27 14:07:19.107: INFO: Pod Quantity: 1 Status: Pending +Oct 27 14:07:20.120: INFO: Pod Quantity: 1 Status: Pending +Oct 27 14:07:21.121: INFO: Pod Status: Running +Oct 27 14:07:21.121: INFO: Creating service... +Oct 27 14:07:21.140: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/pods/agnhost/proxy/some/path/with/DELETE +Oct 27 14:07:21.210: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 14:07:21.210: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/pods/agnhost/proxy/some/path/with/GET +Oct 27 14:07:21.225: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 14:07:21.225: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/pods/agnhost/proxy/some/path/with/HEAD +Oct 27 14:07:21.239: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 14:07:21.239: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/pods/agnhost/proxy/some/path/with/OPTIONS +Oct 27 14:07:21.285: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 14:07:21.285: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/pods/agnhost/proxy/some/path/with/PATCH +Oct 27 14:07:21.299: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 27 14:07:21.299: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/pods/agnhost/proxy/some/path/with/POST +Oct 27 14:07:21.313: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 14:07:21.313: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/pods/agnhost/proxy/some/path/with/PUT +Oct 27 14:07:21.327: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 27 14:07:21.327: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/services/test-service/proxy/some/path/with/DELETE +Oct 27 14:07:21.345: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 27 14:07:21.345: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/services/test-service/proxy/some/path/with/GET +Oct 27 14:07:21.362: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 27 14:07:21.362: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/services/test-service/proxy/some/path/with/HEAD +Oct 27 14:07:21.386: INFO: http.Client request:HEAD | StatusCode:200 +Oct 27 14:07:21.386: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/services/test-service/proxy/some/path/with/OPTIONS +Oct 27 14:07:21.402: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 27 14:07:21.402: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/services/test-service/proxy/some/path/with/PATCH +Oct 27 14:07:21.419: INFO: http.Client request:PATCH 
| StatusCode:200 | Response:foo | Method:PATCH +Oct 27 14:07:21.419: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/services/test-service/proxy/some/path/with/POST +Oct 27 14:07:21.436: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 27 14:07:21.436: INFO: Starting http.Client for https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-8840/services/test-service/proxy/some/path/with/PUT +Oct 27 14:07:21.454: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:21.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-8840" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":18,"skipped":273,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:21.489: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7346 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:07:21.685: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7346 create -f -' +Oct 27 14:07:21.884: INFO: stderr: "" +Oct 27 14:07:21.884: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Oct 27 14:07:21.884: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7346 create -f -' +Oct 27 14:07:22.245: INFO: stderr: "" +Oct 27 14:07:22.245: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 14:07:23.258: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:07:23.258: INFO: Found 0 / 1 +Oct 27 14:07:24.257: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:07:24.257: INFO: Found 0 / 1 +Oct 27 14:07:25.258: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 14:07:25.258: INFO: Found 1 / 1 +Oct 27 14:07:25.258: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1
+Oct 27 14:07:25.271: INFO: Selector matched 1 pods for map[app:agnhost]
+Oct 27 14:07:25.271: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
+Oct 27 14:07:25.271: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7346 describe pod agnhost-primary-9fc9n'
+Oct 27 14:07:25.390: INFO: stderr: ""
+Oct 27 14:07:25.390: INFO: stdout: "Name: agnhost-primary-9fc9n\nNamespace: kubectl-7346\nPriority: 0\nNode: shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc/10.250.0.3\nStart Time: Wed, 27 Oct 2021 14:07:21 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: aa74d1c2648e331624b7f66e9cf6443917d6528b62de9d27340793b7cbdd0794\n cni.projectcalico.org/podIP: 100.96.1.26/32\n cni.projectcalico.org/podIPs: 100.96.1.26/32\n kubernetes.io/psp: e2e-test-privileged-psp\nStatus: Running\nIP: 100.96.1.26\nIPs:\n IP: 100.96.1.26\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://8dfe5ad98a6829016c136fa00fe98468eff4159258a81cc246cdb513f1db2e9e\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 27 Oct 2021 14:07:23 +0000\n Ready: True\n Restart Count: 0\n Environment:\n KUBERNETES_SERVICE_HOST: api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29w6m (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-29w6m:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-7346/agnhost-primary-9fc9n to shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n"
+Oct 27 14:07:25.391: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7346 describe rc agnhost-primary'
+Oct 27 14:07:25.524: INFO: stderr: ""
+Oct 27 14:07:25.524: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7346\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-9fc9n\n"
+Oct 27 14:07:25.524: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7346 describe service agnhost-primary'
+Oct 27 14:07:25.656: INFO: stderr: ""
+Oct 27 14:07:25.656: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7346\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.64.221.196\nIPs: 100.64.221.196\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.1.26:6379\nSession Affinity: None\nEvents: <none>\n"
+Oct 27 14:07:25.677: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7346 describe node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5'
+Oct 27 14:07:25.851: INFO: stderr: ""
+Oct 27 14:07:25.851: INFO: stdout: "Name: shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=n1-standard-2\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=europe-west1\n failure-domain.beta.kubernetes.io/zone=europe-west1-b\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=n1-standard-2\n node.kubernetes.io/role=node\n topology.gke.io/zone=europe-west1-b\n topology.kubernetes.io/region=europe-west1\n topology.kubernetes.io/zone=europe-west1-b\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/cri-name=docker\n worker.gardener.cloud/pool=worker-1\n worker.gardener.cloud/system-components=true\nAnnotations: checksum/cloud-config-data: dc5b6c0e43d9c87785d0076b629210c508baac68ec4eeff794033321b3432492\n csi.volume.kubernetes.io/nodeid:\n {\"pd.csi.storage.gke.io\":\"projects/sap-gcp-k8s-canary-custom/zones/europe-west1-b/instances/shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5\"}\n node.alpha.kubernetes.io/ttl: 0\n node.machine.sapcloud.io/last-applied-anno-labels-taints:\n {\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"node.kubernetes.io/role\":\"node\",\"worker.garden.sapcloud.io/group\":\"worker-1\",\"worker.gard...\n projectcalico.org/IPv4Address: 10.250.0.2/32\n projectcalico.org/IPv4IPIPTunnelAddr: 100.96.0.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 27 Oct 2021 13:56:01 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5\n AcquireTime: <unset>\n RenewTime: Wed, 27 Oct 2021 14:07:25 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n ReadonlyFilesystem False Wed, 27 Oct 2021 14:03:22 +0000 Wed, 27 Oct 2021 13:58:19 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice Unknown Wed, 27 Oct 2021 14:03:22 +0000 Wed, 27 Oct 2021 13:58:20 +0000 NoFrequentUnregisterNetDevice error watching journald: failed to stat the log path \"/var/log/journal\": stat /v\n FrequentKubeletRestart Unknown Wed, 27 Oct 2021 14:03:22 +0000 Wed, 27 Oct 2021 13:58:20 +0000 NoFrequentKubeletRestart error watching journald: failed to stat the log path \"/var/log/journal\":
stat /v\n FrequentDockerRestart Unknown Wed, 27 Oct 2021 14:03:22 +0000 Wed, 27 Oct 2021 13:58:21 +0000 NoFrequentDockerRestart error watching journald: failed to stat the log path \"/var/log/journal\": stat /v\n FrequentContainerdRestart Unknown Wed, 27 Oct 2021 14:03:22 +0000 Wed, 27 Oct 2021 13:58:21 +0000 NoFrequentContainerdRestart error watching journald: failed to stat the log path \"/var/log/journal\": stat /v\n KernelDeadlock False Wed, 27 Oct 2021 14:03:22 +0000 Wed, 27 Oct 2021 13:58:19 +0000 KernelHasNoDeadlock kernel has no deadlock\n NetworkUnavailable False Wed, 27 Oct 2021 13:58:11 +0000 Wed, 27 Oct 2021 13:58:11 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Wed, 27 Oct 2021 14:07:17 +0000 Wed, 27 Oct 2021 13:56:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 27 Oct 2021 14:07:17 +0000 Wed, 27 Oct 2021 13:56:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 27 Oct 2021 14:07:17 +0000 Wed, 27 Oct 2021 13:56:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 27 Oct 2021 14:07:17 +0000 Wed, 27 Oct 2021 13:56:22 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.250.0.2\n Hostname: shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5\nCapacity:\n cpu: 2\n ephemeral-storage: 31423468Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7623984Ki\n pods: 110\nAllocatable:\n cpu: 1920m\n ephemeral-storage: 30568749647\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6473008Ki\n pods: 110\nSystem Info:\n Machine ID: 97d25a33aa5e0c43a4c533c9e4d314f5\n System UUID: 97d25a33-aa5e-0c43-a4c5-33c9e4d314f5\n Boot ID: ec00792d-bac0-49a8-8819-f42eaf66bc6d\n Kernel Version: 5.3.18-24.78-default\n OS Image: SUSE Linux Enterprise Server 15 SP2\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.6-ce\n Kubelet Version: v1.22.2\n Kube-Proxy Version: v1.22.2\nPodCIDR: 100.96.0.0/24\nPodCIDRs: 100.96.0.0/24\nProviderID: gce://sap-gcp-k8s-canary-custom/europe-west1-b/shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5\nNon-terminated Pods: (17 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-vv84b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m\n kube-system apiserver-proxy-sl296 40m (2%) 400m (20%) 40Mi (0%) 500Mi (7%) 11m\n kube-system calico-node-4h2tf 250m (13%) 800m (41%) 100Mi (1%) 700Mi (11%) 9m20s\n kube-system calico-node-vertical-autoscaler-785b5f968-9qxv8 10m (0%) 10m (0%) 50Mi (0%) 50Mi (0%) 12m\n kube-system calico-typha-horizontal-autoscaler-5b58bb446c-s7nwv 10m (0%) 10m (0%) 50Mi (0%) 50Mi (0%) 12m\n kube-system calico-typha-vertical-autoscaler-5c9655cddd-qxmpq 10m (0%) 10m (0%) 50Mi (0%) 50Mi (0%) 12m\n kube-system coredns-6944b5cf58-cqcmx 50m (2%) 250m (13%) 15Mi (0%) 500Mi (7%) 12m\n kube-system coredns-6944b5cf58-qwp9p 50m (2%) 250m (13%) 15Mi (0%) 500Mi (7%) 12m\n kube-system csi-driver-node-l4n7m 40m (2%) 110m (5%) 114Mi (1%) 180Mi (2%) 11m\n kube-system kube-proxy-85xr2 46m (2%) 140m (7%) 47753748 (0%) 145014992 (2%) 7m49s\n kube-system metrics-server-6b8fdcd747-t4xbj 50m (2%) 500m (26%) 150Mi (2%) 1Gi (16%) 12m\n kube-system node-exporter-cwjxv 50m (2%) 150m (7%) 50Mi (0%) 150Mi (2%) 11m\n kube-system node-problem-detector-shkl7 20m (1%) 80m (4%) 20Mi (0%) 80Mi (1%) 11m\n kube-system 
vpn-shoot-77b49d5987-8ddn6 100m (5%) 400m (20%) 100Mi (1%) 400Mi (6%) 12m\n kubernetes-dashboard dashboard-metrics-scraper-7ccbfc448f-l8nhq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m\n kubernetes-dashboard kubernetes-dashboard-7888b55b49-xptfd 50m (2%) 200m (10%) 50Mi (0%) 200Mi (3%) 12m\n proxy-8840 agnhost 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 776m (40%) 3310m (172%)\n memory 890808852 (13%) 4741972176 (71%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 11m kubelet Starting kubelet.\n Normal NodeHasSufficientMemory 11m (x2 over 11m) kubelet Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 11m (x2 over 11m) kubelet Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 11m (x2 over 11m) kubelet Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods\n Normal NodeReady 11m kubelet Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 status is now: NodeReady\n Normal NoFrequentKubeletRestart 9m5s systemd-monitor Node condition FrequentKubeletRestart is now: Unknown, reason: NoFrequentKubeletRestart\n Normal NoFrequentUnregisterNetDevice 9m5s kernel-monitor Node condition FrequentUnregisterNetDevice is now: Unknown, reason: NoFrequentUnregisterNetDevice\n Normal NoFrequentDockerRestart 9m4s systemd-monitor Node condition FrequentDockerRestart is now: Unknown, reason: NoFrequentDockerRestart\n Normal NoFrequentContainerdRestart 9m4s systemd-monitor Node condition FrequentContainerdRestart is now: Unknown, reason: NoFrequentContainerdRestart\n" +Oct 27 14:07:25.851: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7346 describe namespace kubectl-7346' +Oct 27 14:07:25.974: INFO: stderr: "" +Oct 27 14:07:25.974: INFO: stdout: "Name: kubectl-7346\nLabels: e2e-framework=kubectl\n e2e-run=5a0c32b1-2020-4b27-b9ec-04d5f89fa62f\n kubernetes.io/metadata.name=kubectl-7346\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:25.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7346" for this suite. 
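+(For reference: the describe checks above use only stock kubectl and can be repeated against any cluster; the namespace and object names below are illustrative, not the suite-generated ones.)
+# describe the replication controller's service, a node, and a namespace
+kubectl -n demo-ns describe service agnhost-primary
+kubectl describe node "$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')"
+kubectl describe namespace demo-ns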
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":19,"skipped":293,"failed":0} +SSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:26.013: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6496 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:07:26.209: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6496 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 14:07:26.309: INFO: stderr: "" +Oct 27 14:07:26.309: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Oct 27 14:07:31.364: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6496 get pod e2e-test-httpd-pod -o json' +Oct 27 14:07:31.445: INFO: stderr: "" +Oct 27 14:07:31.446: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"b67f1d18f51408b44ac5284881f3f313a0379476bbdc5cf0ac59e24243d7f522\",\n \"cni.projectcalico.org/podIP\": \"100.96.1.27/32\",\n \"cni.projectcalico.org/podIPs\": \"100.96.1.27/32\",\n \"kubernetes.io/psp\": \"e2e-test-privileged-psp\"\n },\n \"creationTimestamp\": \"2021-10-27T14:07:26Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6496\",\n \"resourceVersion\": \"5547\",\n \"uid\": \"4e2490b9-7ea7-4806-8cdb-9d2dabfe8ae3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"env\": [\n {\n \"name\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com\"\n }\n ],\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-mv9pg\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-mv9pg\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:07:26Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:07:27Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:07:27Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-27T14:07:26Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://c14a1317e39b6a78251e6ab2d5b8aedd14a5ab1f550a2d88db25556093d66760\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-27T14:07:27Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.250.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"100.96.1.27\",\n \"podIPs\": [\n {\n \"ip\": \"100.96.1.27\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-27T14:07:26Z\"\n }\n}\n" +STEP: replace the image in the pod +Oct 27 14:07:31.446: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6496 replace -f -' +Oct 27 14:07:31.633: INFO: stderr: "" +Oct 27 14:07:31.633: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 +Oct 27 14:07:31.645: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6496 delete pods e2e-test-httpd-pod' +Oct 27 14:07:33.891: INFO: stderr: "" +Oct 27 14:07:33.891: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:33.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6496" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":20,"skipped":300,"failed":0} +SS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:33.925: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename runtimeclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-381 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Oct 27 14:07:34.203: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Oct 27 14:07:34.275: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:34.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-381" for this suite. 
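+(The create/get/list/patch/delete sequence above exercises the node.k8s.io/v1 API. A minimal sketch with stock kubectl, assuming a "runc" CRI handler on the nodes; the object name is illustrative:)
+cat <<'EOF' | kubectl apply -f -
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
+metadata:
+  name: demo-runtimeclass   # illustrative name
+handler: runc               # must match a handler configured in the node's CRI
+EOF
+kubectl get runtimeclasses.node.k8s.io
+kubectl delete runtimeclass demo-runtimeclass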
+•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":21,"skipped":302,"failed":0} +SSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:34.367: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-4859 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +Oct 27 14:07:35.121: INFO: created pod pod-service-account-defaultsa +Oct 27 14:07:35.121: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Oct 27 14:07:35.140: INFO: created pod pod-service-account-mountsa +Oct 27 14:07:35.140: INFO: pod pod-service-account-mountsa service account token volume mount: true +Oct 27 14:07:35.158: INFO: created pod pod-service-account-nomountsa +Oct 27 14:07:35.158: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Oct 27 14:07:35.176: INFO: created pod pod-service-account-defaultsa-mountspec +Oct 27 14:07:35.176: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Oct 27 14:07:35.193: INFO: created pod pod-service-account-mountsa-mountspec +Oct 27 14:07:35.193: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Oct 27 14:07:35.236: INFO: created pod pod-service-account-nomountsa-mountspec +Oct 27 14:07:35.236: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Oct 27 14:07:35.251: INFO: created pod pod-service-account-defaultsa-nomountspec +Oct 27 14:07:35.251: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Oct 27 14:07:35.268: INFO: created pod pod-service-account-mountsa-nomountspec +Oct 27 14:07:35.268: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Oct 27 14:07:35.335: INFO: created pod pod-service-account-nomountsa-nomountspec +Oct 27 14:07:35.335: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:35.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4859" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":22,"skipped":305,"failed":0} +S +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:35.369: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7318 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +Oct 27 14:07:35.590: INFO: Creating simple deployment test-deployment-rnc78 +Oct 27 14:07:35.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"test-deployment-rnc78-794dd694d8\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:07:37.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-rnc78-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:07:39.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-rnc78-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:07:41.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-rnc78-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:07:43.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-rnc78-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Getting /status +Oct 27 14:07:45.677: INFO: Deployment test-deployment-rnc78 has Conditions: [{Available True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rnc78-794dd694d8" has successfully progressed.}] +STEP: updating Deployment Status +Oct 27 14:07:45.703: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940464, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940464, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940464, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770940455, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-rnc78-794dd694d8\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Oct 27 14:07:45.714: INFO: Observed &Deployment event: ADDED +Oct 27 14:07:45.714: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rnc78-794dd694d8"} +Oct 27 14:07:45.714: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:07:45.714: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rnc78-794dd694d8"} +Oct 27 14:07:45.715: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:07:45.715: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:07:45.715: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 27 14:07:45.715: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rnc78-794dd694d8" is progressing.} +Oct 27 14:07:45.715: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:07:45.715: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:07:45.715: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rnc78-794dd694d8" has successfully progressed.} +Oct 27 14:07:45.715: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:07:45.715: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:07:44 +0000 
UTC 2021-10-27 14:07:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
+Oct 27 14:07:45.715: INFO: Observed Deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rnc78-794dd694d8" has successfully progressed.}
+Oct 27 14:07:45.715: INFO: Found Deployment test-deployment-rnc78 in namespace deployment-7318 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
+Oct 27 14:07:45.715: INFO: Deployment test-deployment-rnc78 has an updated status
+STEP: patching the Deployment Status
+Oct 27 14:07:45.715: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
+Oct 27 14:07:45.729: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}}
+STEP: watching for the Deployment status to be patched
+Oct 27 14:07:45.739: INFO: Observed &Deployment event: ADDED
+Oct 27 14:07:45.739: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rnc78-794dd694d8"}
+Oct 27 14:07:45.739: INFO: Observed &Deployment event: MODIFIED
+Oct 27 14:07:45.739: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rnc78-794dd694d8"}
+Oct 27 14:07:45.739: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
+Oct 27 14:07:45.739: INFO: Observed &Deployment event: MODIFIED
+Oct 27 14:07:45.739: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
+Oct 27 14:07:45.739: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:35 +0000 UTC 2021-10-27 14:07:35 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rnc78-794dd694d8" is progressing.}
+Oct 27 14:07:45.740: INFO: Observed &Deployment event: MODIFIED
+Oct 27 14:07:45.740: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:44 +0000 UTC MinimumReplicasAvailable 
Deployment has minimum availability.} +Oct 27 14:07:45.740: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rnc78-794dd694d8" has successfully progressed.} +Oct 27 14:07:45.740: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:07:45.740: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 27 14:07:45.740: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-27 14:07:44 +0000 UTC 2021-10-27 14:07:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rnc78-794dd694d8" has successfully progressed.} +Oct 27 14:07:45.740: INFO: Observed deployment test-deployment-rnc78 in namespace deployment-7318 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 14:07:45.740: INFO: Observed &Deployment event: MODIFIED +Oct 27 14:07:45.740: INFO: Found deployment test-deployment-rnc78 in namespace deployment-7318 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Oct 27 14:07:45.740: INFO: Deployment test-deployment-rnc78 has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:07:45.753: INFO: Deployment "test-deployment-rnc78": +&Deployment{ObjectMeta:{test-deployment-rnc78 deployment-7318 7cc20262-0065-4388-909c-2e7adf6becf9 5814 1 2021-10-27 14:07:35 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 14:07:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2021-10-27 14:07:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2021-10-27 14:07:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d24d48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 14:07:45 +0000 UTC,LastTransitionTime:2021-10-27 14:07:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-rnc78-794dd694d8" has successfully progressed.,LastUpdateTime:2021-10-27 14:07:45 +0000 UTC,LastTransitionTime:2021-10-27 14:07:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:07:45.765: INFO: New ReplicaSet "test-deployment-rnc78-794dd694d8" of Deployment "test-deployment-rnc78": +&ReplicaSet{ObjectMeta:{test-deployment-rnc78-794dd694d8 deployment-7318 cb613870-d771-4957-b1ec-2df165f59656 5800 1 2021-10-27 14:07:35 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-rnc78 7cc20262-0065-4388-909c-2e7adf6becf9 0xc002d25377 0xc002d25378}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:07:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7cc20262-0065-4388-909c-2e7adf6becf9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:07:44 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 794dd694d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d254c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:07:45.777: INFO: Pod "test-deployment-rnc78-794dd694d8-sn8tt" is available: +&Pod{ObjectMeta:{test-deployment-rnc78-794dd694d8-sn8tt test-deployment-rnc78-794dd694d8- deployment-7318 ab0c4996-368f-48fb-bce1-fa2891dbe5ec 5799 0 2021-10-27 14:07:35 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[cni.projectcalico.org/containerID:c42a22ade3172d0622f9857fb616dde3ba832d00ee7ec077f6c500d503ceeb7c cni.projectcalico.org/podIP:100.96.0.21/32 cni.projectcalico.org/podIPs:100.96.0.21/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-rnc78-794dd694d8 cb613870-d771-4957-b1ec-2df165f59656 0xc002d25a47 0xc002d25a48}] [] [{kube-controller-manager Update v1 2021-10-27 14:07:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb613870-d771-4957-b1ec-2df165f59656\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 
14:07:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:07:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lhh7r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhh7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:07:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:07:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:07:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:07:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:100.96.0.21,StartTime:2021-10-27 14:07:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:07:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b19ac3f8b6e0330ad05865da147cadf480f657d8b8dc5ef368f4e46189875f0f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:45.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7318" for this suite. 
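+(The test above reads, updates, and patches the Deployment's /status subresource. The same endpoints can be inspected with stock kubectl; namespace and deployment names below are illustrative:)
+# read the /status subresource via the raw API path
+kubectl get --raw /apis/apps/v1/namespaces/demo-ns/deployments/demo-deploy/status
+# or print just the status conditions
+kubectl -n demo-ns get deployment demo-deploy -o jsonpath='{.status.conditions}'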
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":23,"skipped":306,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:45.804: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4873 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service nodeport-test with type=NodePort in namespace services-4873 +STEP: creating replication controller nodeport-test in namespace services-4873 +I1027 14:07:46.032147 5683 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4873, replica count: 2 +Oct 27 14:07:49.083: INFO: Creating new exec pod +I1027 14:07:49.083187 5683 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:07:52.154: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:07:52.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:52.570: INFO: stdout: "" +Oct 27 14:07:53.571: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 27 14:07:54.015: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:54.015: INFO: stdout: "nodeport-test-zt9zs" +Oct 27 14:07:54.015: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.96.121 80' +Oct 27 14:07:54.395: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.96.121 80\nConnection to 100.64.96.121 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:54.395: INFO: stdout: "" +Oct 27 14:07:55.395: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config 
--namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.96.121 80' +Oct 27 14:07:55.748: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.96.121 80\nConnection to 100.64.96.121 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:55.748: INFO: stdout: "" +Oct 27 14:07:56.396: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.96.121 80' +Oct 27 14:07:56.796: INFO: stderr: "+ nc -v -t -w 2 100.64.96.121 80\n+ echo hostName\nConnection to 100.64.96.121 80 port [tcp/http] succeeded!\n" +Oct 27 14:07:56.796: INFO: stdout: "nodeport-test-zt9zs" +Oct 27 14:07:56.796: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.2 32192' +Oct 27 14:07:57.127: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.2 32192\nConnection to 10.250.0.2 32192 port [tcp/*] succeeded!\n" +Oct 27 14:07:57.127: INFO: stdout: "" +Oct 27 14:07:58.127: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.2 32192' +Oct 27 14:07:58.581: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.2 32192\nConnection to 10.250.0.2 32192 port [tcp/*] succeeded!\n" +Oct 27 14:07:58.581: INFO: stdout: "nodeport-test-zt9zs" +Oct 27 14:07:58.581: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4873 exec execpodm6dnw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.3 32192' +Oct 27 14:07:58.957: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.3 32192\nConnection to 10.250.0.3 32192 port [tcp/*] succeeded!\n" +Oct 27 14:07:58.957: INFO: stdout: "nodeport-test-t52qj" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:07:58.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4873" for this suite. 
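+(The nc probes above check the service on its DNS name, its ClusterIP, and each node's NodePort. A minimal sketch of the same setup; the deployment name is illustrative and <node-ip> stands for a node's InternalIP:)
+kubectl create deployment nodeport-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
+kubectl expose deployment nodeport-demo --type=NodePort --port=80
+NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
+# from a pod or node with network access, the service should answer on every node
+nc -v -t -w 2 <node-ip> "$NODE_PORT"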
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":24,"skipped":327,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:07:58.991: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4410 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 14:07:59.301: INFO: Waiting up to 5m0s for pod "pod-efd9c50a-49ce-4565-ba46-c208a60377ce" in namespace "emptydir-4410" to be "Succeeded or Failed" +Oct 27 14:07:59.313: INFO: Pod "pod-efd9c50a-49ce-4565-ba46-c208a60377ce": Phase="Pending", Reason="", readiness=false. Elapsed: 11.418827ms +Oct 27 14:08:01.324: INFO: Pod "pod-efd9c50a-49ce-4565-ba46-c208a60377ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023023552s +STEP: Saw pod success +Oct 27 14:08:01.324: INFO: Pod "pod-efd9c50a-49ce-4565-ba46-c208a60377ce" satisfied condition "Succeeded or Failed" +Oct 27 14:08:01.336: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 pod pod-efd9c50a-49ce-4565-ba46-c208a60377ce container test-container: +STEP: delete the pod +Oct 27 14:08:01.373: INFO: Waiting for pod pod-efd9c50a-49ce-4565-ba46-c208a60377ce to disappear +Oct 27 14:08:01.385: INFO: Pod pod-efd9c50a-49ce-4565-ba46-c208a60377ce no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:01.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4410" for this suite. 
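+(The (root,0777,default) case above writes through an emptyDir volume on the node's default medium. A minimal sketch with illustrative names:)
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo          # illustrative
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
+    command: ["sh", "-c", "stat -c '%a' /mnt/volume && touch /mnt/volume/ok"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /mnt/volume
+  volumes:
+  - name: scratch
+    emptyDir: {}               # default medium, as in the test above
+EOF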
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":25,"skipped":356,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:01.419: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-8174 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-9241 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3738 +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:17.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-8174" for this suite. +STEP: Destroying namespace "nsdeletetest-9241" for this suite. +Oct 27 14:08:17.272: INFO: Namespace nsdeletetest-9241 was already deleted +STEP: Destroying namespace "nsdeletetest-3738" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":26,"skipped":422,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:17.285: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9999 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:08:17.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a" in namespace "downward-api-9999" to be "Succeeded or Failed" +Oct 27 14:08:17.508: INFO: Pod "downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.970865ms +Oct 27 14:08:19.520: INFO: Pod "downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023692544s +Oct 27 14:08:21.533: INFO: Pod "downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036759294s +STEP: Saw pod success +Oct 27 14:08:21.534: INFO: Pod "downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a" satisfied condition "Succeeded or Failed" +Oct 27 14:08:21.545: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 pod downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a container client-container: +STEP: delete the pod +Oct 27 14:08:21.584: INFO: Waiting for pod downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a to disappear +Oct 27 14:08:21.595: INFO: Pod downwardapi-volume-b9f0d165-1946-4828-a860-ddb27e79d58a no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:21.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9999" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":27,"skipped":442,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:21.628: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8367 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-89ed752d-be9f-4e53-aeea-59b8a9460150 +STEP: Creating a pod to test consume configMaps +Oct 27 14:08:21.858: INFO: Waiting up to 5m0s for pod "pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc" in namespace "configmap-8367" to be "Succeeded or Failed" +Oct 27 14:08:21.869: INFO: Pod "pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.339279ms +Oct 27 14:08:23.883: INFO: Pod "pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024984695s +Oct 27 14:08:25.896: INFO: Pod "pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038357892s +STEP: Saw pod success +Oct 27 14:08:25.896: INFO: Pod "pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc" satisfied condition "Succeeded or Failed" +Oct 27 14:08:25.908: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 pod pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc container agnhost-container: +STEP: delete the pod +Oct 27 14:08:25.945: INFO: Waiting for pod pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc to disappear +Oct 27 14:08:25.957: INFO: Pod pod-configmaps-10620823-2701-4d71-81c0-ecd79a8d14dc no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:25.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8367" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":28,"skipped":453,"failed":0} +S +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:25.996: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2498 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap that has name configmap-test-emptyKey-92f15e9b-b2a0-4b82-8484-f69e5a98ad3f +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:26.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2498" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":29,"skipped":454,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:26.231: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-6369 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +Oct 27 14:08:26.439: INFO: created test-event-1 +Oct 27 14:08:26.452: INFO: created test-event-2 +Oct 27 14:08:26.465: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Oct 27 14:08:26.477: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Oct 27 14:08:26.502: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:26.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-6369" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":30,"skipped":527,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:26.540: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-515 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:08:26.765: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4" in namespace "downward-api-515" to be "Succeeded or Failed" +Oct 27 14:08:26.777: INFO: Pod "downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.894614ms +Oct 27 14:08:28.788: INFO: Pod "downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023597871s +Oct 27 14:08:30.801: INFO: Pod "downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036413757s +STEP: Saw pod success +Oct 27 14:08:30.801: INFO: Pod "downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4" satisfied condition "Succeeded or Failed" +Oct 27 14:08:30.812: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 pod downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4 container client-container: +STEP: delete the pod +Oct 27 14:08:30.849: INFO: Waiting for pod downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4 to disappear +Oct 27 14:08:30.860: INFO: Pod downwardapi-volume-06ff73ba-eb43-4111-bfcb-1dfbe3e15fc4 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:08:30.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-515" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":31,"skipped":560,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:08:30.893: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9135 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9135.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9135.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:08:45.253: INFO: DNS probes using dns-test-09b5e391-7498-4a67-9e71-b67f37ab17b6 succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9135.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9135.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:08:49.426: INFO: File wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local from pod dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:08:49.441: INFO: File jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local from pod dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:08:49.441: INFO: Lookups using dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 failed for: [wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local] + +Oct 27 14:08:54.498: INFO: File wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local from pod dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 27 14:08:54.512: INFO: File jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local from pod dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:08:54.512: INFO: Lookups using dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 failed for: [wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local] + +Oct 27 14:08:59.456: INFO: File wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local from pod dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:08:59.503: INFO: File jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local from pod dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:08:59.503: INFO: Lookups using dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 failed for: [wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local] + +Oct 27 14:09:04.457: INFO: File wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local from pod dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 27 14:09:04.507: INFO: Lookups using dns-9135/dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 failed for: [wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local] + +Oct 27 14:09:09.470: INFO: DNS probes using dns-test-f689b450-f2f3-4c4b-94a3-1a4a20470569 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9135.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9135.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9135.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9135.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:09:13.718: INFO: DNS probes using dns-test-bf662961-c811-40e8-b213-03d61a764425 succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:09:13.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9135" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":32,"skipped":573,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:09:13.791: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-2617 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Oct 27 14:09:54.108: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Oct 27 14:09:54.108: INFO: Deleting pod "simpletest.rc-4jpz4" in namespace "gc-2617" +W1027 14:09:54.108603 5683 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Oct 27 14:09:54.125: INFO: Deleting pod "simpletest.rc-9l82q" in namespace "gc-2617" +Oct 27 14:09:54.152: INFO: Deleting pod "simpletest.rc-9ncfp" in namespace "gc-2617" +Oct 27 14:09:54.170: INFO: Deleting pod "simpletest.rc-jv45p" in namespace "gc-2617" +Oct 27 14:09:54.190: INFO: Deleting pod "simpletest.rc-kthrh" in namespace "gc-2617" +Oct 27 14:09:54.229: INFO: Deleting pod "simpletest.rc-nbcbp" in namespace "gc-2617" +Oct 27 14:09:54.246: INFO: Deleting pod "simpletest.rc-nd6td" in namespace "gc-2617" +Oct 27 14:09:54.263: INFO: Deleting pod "simpletest.rc-nk82n" in namespace "gc-2617" +Oct 27 14:09:54.277: INFO: Deleting pod "simpletest.rc-qlmzd" in namespace "gc-2617" +Oct 27 14:09:54.301: INFO: Deleting pod "simpletest.rc-rgzbk" in namespace "gc-2617" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:09:54.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-2617" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":33,"skipped":576,"failed":0} +SSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:09:54.360: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6795 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:09:57.638: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:09:57.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-6795" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":34,"skipped":580,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:09:57.699: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7042 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:09:57.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c" in namespace "projected-7042" to be "Succeeded or Failed" +Oct 27 14:09:57.920: INFO: Pod "downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.257654ms +Oct 27 14:09:59.933: INFO: Pod "downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025569379s +Oct 27 14:10:01.945: INFO: Pod "downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038085411s +STEP: Saw pod success +Oct 27 14:10:01.945: INFO: Pod "downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c" satisfied condition "Succeeded or Failed" +Oct 27 14:10:01.957: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c container client-container: +STEP: delete the pod +Oct 27 14:10:02.029: INFO: Waiting for pod downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c to disappear +Oct 27 14:10:02.042: INFO: Pod downwardapi-volume-cbb910dc-85ec-4a6d-91b8-09433988391c no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:02.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7042" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":35,"skipped":588,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:02.076: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4514 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:10:02.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:10:05.804: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:16.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4514" for this suite. +STEP: Destroying namespace "webhook-4514-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":36,"skipped":599,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:16.739: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-3780 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:10:17.723: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:10:20.780: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:10:20.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:23.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-3780" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":37,"skipped":613,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:24.332: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1266 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:10:24.597: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0c679ee-8b44-4b79-835b-f6677009b862" in namespace "projected-1266" to be "Succeeded or Failed" +Oct 27 14:10:24.609: INFO: Pod "downwardapi-volume-f0c679ee-8b44-4b79-835b-f6677009b862": Phase="Pending", Reason="", readiness=false. Elapsed: 12.251911ms +Oct 27 14:10:26.629: INFO: Pod "downwardapi-volume-f0c679ee-8b44-4b79-835b-f6677009b862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032401016s +STEP: Saw pod success +Oct 27 14:10:26.629: INFO: Pod "downwardapi-volume-f0c679ee-8b44-4b79-835b-f6677009b862" satisfied condition "Succeeded or Failed" +Oct 27 14:10:26.641: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-f0c679ee-8b44-4b79-835b-f6677009b862 container client-container: +STEP: delete the pod +Oct 27 14:10:26.681: INFO: Waiting for pod downwardapi-volume-f0c679ee-8b44-4b79-835b-f6677009b862 to disappear +Oct 27 14:10:26.728: INFO: Pod downwardapi-volume-f0c679ee-8b44-4b79-835b-f6677009b862 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:26.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1266" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":38,"skipped":642,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:26.763: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5108 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 14:10:27.030: INFO: Waiting up to 5m0s for pod "pod-bbee316e-6e5f-4363-82b3-7bc0064912e0" in namespace "emptydir-5108" to be "Succeeded or Failed" +Oct 27 14:10:27.042: INFO: Pod "pod-bbee316e-6e5f-4363-82b3-7bc0064912e0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.760724ms +Oct 27 14:10:29.055: INFO: Pod "pod-bbee316e-6e5f-4363-82b3-7bc0064912e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024585866s +STEP: Saw pod success +Oct 27 14:10:29.055: INFO: Pod "pod-bbee316e-6e5f-4363-82b3-7bc0064912e0" satisfied condition "Succeeded or Failed" +Oct 27 14:10:29.066: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-bbee316e-6e5f-4363-82b3-7bc0064912e0 container test-container: +STEP: delete the pod +Oct 27 14:10:32.236: INFO: Waiting for pod pod-bbee316e-6e5f-4363-82b3-7bc0064912e0 to disappear +Oct 27 14:10:32.248: INFO: Pod pod-bbee316e-6e5f-4363-82b3-7bc0064912e0 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:32.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5108" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":39,"skipped":665,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:32.283: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1278 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:43.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1278" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":346,"completed":40,"skipped":670,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:43.607: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-892 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +STEP: creating the pod +Oct 27 14:10:43.802: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 create -f -' +Oct 27 14:10:44.061: INFO: stderr: "" +Oct 27 14:10:44.061: INFO: stdout: "pod/pause created\n" +Oct 27 14:10:44.061: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Oct 27 14:10:44.061: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-892" to be "running and ready" +Oct 27 14:10:44.074: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.971758ms +Oct 27 14:10:46.086: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.025596651s +Oct 27 14:10:46.086: INFO: Pod "pause" satisfied condition "running and ready" +Oct 27 14:10:46.086: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: adding the label testing-label with value testing-label-value to a pod +Oct 27 14:10:46.087: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 label pods pause testing-label=testing-label-value' +Oct 27 14:10:46.217: INFO: stderr: "" +Oct 27 14:10:46.217: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Oct 27 14:10:46.217: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 get pod pause -L testing-label' +Oct 27 14:10:46.310: INFO: stderr: "" +Oct 27 14:10:46.310: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Oct 27 14:10:46.310: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 label pods pause testing-label-' +Oct 27 14:10:46.423: INFO: stderr: "" +Oct 27 14:10:46.423: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Oct 27 14:10:46.423: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 get pod pause -L testing-label' +Oct 27 14:10:46.514: INFO: stderr: "" +Oct 27 14:10:46.514: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +STEP: using delete to clean up resources +Oct 27 14:10:46.514: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 delete --grace-period=0 --force -f -' +Oct 27 14:10:46.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 14:10:46.618: INFO: stdout: "pod \"pause\" force deleted\n" +Oct 27 14:10:46.618: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 get rc,svc -l name=pause --no-headers' +Oct 27 14:10:46.724: INFO: stderr: "No resources found in kubectl-892 namespace.\n" +Oct 27 14:10:46.724: INFO: stdout: "" +Oct 27 14:10:46.724: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-892 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 14:10:46.818: INFO: stderr: "" +Oct 27 14:10:46.818: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:46.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-892" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":41,"skipped":710,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:46.855: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2313 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 14:10:47.107: INFO: The status of Pod annotationupdate70427ba8-2bb0-4c9b-bf64-2980a5aa4a95 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:10:49.121: INFO: The status of Pod annotationupdate70427ba8-2bb0-4c9b-bf64-2980a5aa4a95 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:10:51.122: INFO: The status of Pod annotationupdate70427ba8-2bb0-4c9b-bf64-2980a5aa4a95 is Running (Ready = true) +Oct 27 14:10:51.683: INFO: Successfully updated pod "annotationupdate70427ba8-2bb0-4c9b-bf64-2980a5aa4a95" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:53.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2313" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":42,"skipped":740,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:53.761: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5659 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Oct 27 14:10:54.015: INFO: observed Pod pod-test in namespace pods-5659 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Oct 27 14:10:54.015: INFO: observed Pod pod-test in namespace pods-5659 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC }] +Oct 27 14:10:54.023: INFO: observed Pod pod-test in namespace pods-5659 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC }] +Oct 27 14:10:55.011: INFO: observed Pod pod-test in namespace pods-5659 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC }] +Oct 27 14:10:55.291: INFO: Found Pod pod-test in namespace pods-5659 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 14:10:53 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Oct 27 14:10:55.320: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched +STEP: replacing the Pod's status 
Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Oct 27 14:10:55.391: INFO: observed event type ADDED +Oct 27 14:10:55.391: INFO: observed event type MODIFIED +Oct 27 14:10:55.391: INFO: observed event type MODIFIED +Oct 27 14:10:55.391: INFO: observed event type MODIFIED +Oct 27 14:10:55.391: INFO: observed event type MODIFIED +Oct 27 14:10:55.391: INFO: observed event type MODIFIED +Oct 27 14:10:55.391: INFO: observed event type MODIFIED +Oct 27 14:10:55.391: INFO: observed event type MODIFIED +Oct 27 14:10:57.334: INFO: observed event type MODIFIED +Oct 27 14:10:57.540: INFO: observed event type MODIFIED +Oct 27 14:10:58.359: INFO: observed event type MODIFIED +Oct 27 14:10:58.367: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:10:58.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5659" for this suite. +•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":43,"skipped":770,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:10:58.406: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-748 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:10:58.618: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:10:58.652: INFO: The status of Pod pod-logs-websocket-7ef41a17-58e6-4b14-8a83-e2ca0c2691ee is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:11:00.665: INFO: The status of Pod pod-logs-websocket-7ef41a17-58e6-4b14-8a83-e2ca0c2691ee is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:11:02.665: INFO: The status of Pod pod-logs-websocket-7ef41a17-58e6-4b14-8a83-e2ca0c2691ee is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:02.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-748" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":44,"skipped":780,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:02.759: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8501 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:10.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8501" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":346,"completed":45,"skipped":799,"failed":0} +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:10.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7683 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create deployment with httpd image +Oct 27 14:11:10.296: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7683 create -f -' +Oct 27 14:11:10.548: INFO: stderr: "" +Oct 27 14:11:10.548: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Oct 27 14:11:10.548: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7683 diff -f -' +Oct 27 14:11:10.787: INFO: rc: 1 +Oct 27 14:11:10.787: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7683 delete -f -' +Oct 27 14:11:10.907: INFO: stderr: "" +Oct 27 14:11:10.907: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:11:10.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7683" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":46,"skipped":807,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:11:10.943: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7604 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-bb6eff0d-1566-462c-8aa7-ad51ad84d538 in namespace container-probe-7604 +Oct 27 14:11:15.200: INFO: Started pod busybox-bb6eff0d-1566-462c-8aa7-ad51ad84d538 in namespace container-probe-7604 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:11:15.215: INFO: Initial restart count of pod busybox-bb6eff0d-1566-462c-8aa7-ad51ad84d538 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:16.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7604" for this suite. 
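+(A minimal hand-rolled equivalent of the exec probe above; pod name and timings are illustrative, not the suite's. Because /tmp/health is created once and never removed, "cat /tmp/health" keeps succeeding and the restart count stays at its initial 0.)
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-exec-demo
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
+    livenessProbe:
+      exec:
+        command: ["cat", "/tmp/health"]
+      initialDelaySeconds: 5
+      periodSeconds: 5
+EOF
+kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0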
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":47,"skipped":903,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:16.927: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1320 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 14:15:17.153: INFO: Waiting up to 5m0s for pod "pod-565d5d25-06a8-47de-b140-cb8467e89d32" in namespace "emptydir-1320" to be "Succeeded or Failed" +Oct 27 14:15:17.165: INFO: Pod "pod-565d5d25-06a8-47de-b140-cb8467e89d32": Phase="Pending", Reason="", readiness=false. Elapsed: 12.17013ms +Oct 27 14:15:19.179: INFO: Pod "pod-565d5d25-06a8-47de-b140-cb8467e89d32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026176734s +STEP: Saw pod success +Oct 27 14:15:19.179: INFO: Pod "pod-565d5d25-06a8-47de-b140-cb8467e89d32" satisfied condition "Succeeded or Failed" +Oct 27 14:15:19.193: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-565d5d25-06a8-47de-b140-cb8467e89d32 container test-container: +STEP: delete the pod +Oct 27 14:15:19.277: INFO: Waiting for pod pod-565d5d25-06a8-47de-b140-cb8467e89d32 to disappear +Oct 27 14:15:19.289: INFO: Pod pod-565d5d25-06a8-47de-b140-cb8467e89d32 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:19.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1320" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":48,"skipped":930,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:19.325: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2104 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2104 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2104;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2104 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2104;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2104.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2104.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2104.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2104.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2104.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2104.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2104.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 164.136.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.136.164_udp@PTR;check="$$(dig +tcp +noall +answer +search 164.136.71.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.71.136.164_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2104 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2104;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2104 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2104;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2104.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2104.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2104.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2104.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2104.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2104.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2104.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2104.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2104.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 164.136.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.136.164_udp@PTR;check="$$(dig +tcp +noall +answer +search 164.136.71.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.71.136.164_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:15:33.697: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.744: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.760: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.781: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.796: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.818: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.833: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.849: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.963: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.978: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:33.994: INFO: Unable to read jessie_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:34.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:34.024: INFO: Unable to read jessie_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:34.040: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:34.058: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:34.073: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:34.173: INFO: Lookups using dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2104 wheezy_tcp@dns-test-service.dns-2104 wheezy_udp@dns-test-service.dns-2104.svc wheezy_tcp@dns-test-service.dns-2104.svc wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2104 jessie_tcp@dns-test-service.dns-2104 jessie_udp@dns-test-service.dns-2104.svc jessie_tcp@dns-test-service.dns-2104.svc jessie_udp@_http._tcp.dns-test-service.dns-2104.svc jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc] + +Oct 27 14:15:39.191: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.236: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.251: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.267: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.300: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.322: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.337: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.452: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.475: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.490: INFO: Unable to read jessie_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.525: INFO: Unable to read jessie_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.542: INFO: Unable to read jessie_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.560: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.579: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:39.683: INFO: Lookups using dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2104 wheezy_tcp@dns-test-service.dns-2104 wheezy_udp@dns-test-service.dns-2104.svc wheezy_tcp@dns-test-service.dns-2104.svc wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2104 jessie_tcp@dns-test-service.dns-2104 jessie_udp@dns-test-service.dns-2104.svc jessie_tcp@dns-test-service.dns-2104.svc jessie_udp@_http._tcp.dns-test-service.dns-2104.svc jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc] + +Oct 27 14:15:44.192: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.210: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.231: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.276: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.293: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.327: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.343: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.462: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.480: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.496: INFO: Unable to read jessie_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.513: INFO: Unable to read jessie_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.538: INFO: Unable to read jessie_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.553: INFO: Unable to read jessie_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.568: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.584: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:44.681: INFO: Lookups using dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2104 wheezy_tcp@dns-test-service.dns-2104 
wheezy_udp@dns-test-service.dns-2104.svc wheezy_tcp@dns-test-service.dns-2104.svc wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2104 jessie_tcp@dns-test-service.dns-2104 jessie_udp@dns-test-service.dns-2104.svc jessie_tcp@dns-test-service.dns-2104.svc jessie_udp@_http._tcp.dns-test-service.dns-2104.svc jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc] + +Oct 27 14:15:49.200: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.219: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.263: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.297: INFO: Unable to read wheezy_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.313: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.329: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.346: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.462: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.481: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.501: INFO: Unable to read jessie_udp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.521: INFO: Unable to read jessie_tcp@dns-test-service.dns-2104 from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.547: INFO: Unable 
to read jessie_udp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.565: INFO: Unable to read jessie_tcp@dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.599: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc from pod dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5: the server could not find the requested resource (get pods dns-test-8edda77e-986e-4458-82ce-0306569a0eb5) +Oct 27 14:15:49.711: INFO: Lookups using dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2104 wheezy_tcp@dns-test-service.dns-2104 wheezy_udp@dns-test-service.dns-2104.svc wheezy_tcp@dns-test-service.dns-2104.svc wheezy_udp@_http._tcp.dns-test-service.dns-2104.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2104.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2104 jessie_tcp@dns-test-service.dns-2104 jessie_udp@dns-test-service.dns-2104.svc jessie_tcp@dns-test-service.dns-2104.svc jessie_udp@_http._tcp.dns-test-service.dns-2104.svc jessie_tcp@_http._tcp.dns-test-service.dns-2104.svc] + +Oct 27 14:15:54.640: INFO: DNS probes using dns-2104/dns-test-8edda77e-986e-4458-82ce-0306569a0eb5 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:54.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2104" for this suite. 
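+(A quick manual analogue of the partial-name lookups above: run a pod in the service's namespace and resolve the bare service name, letting the resolv.conf search path qualify it. The names below are the ones this run happened to use; any service/namespace pair should behave the same, though busybox's nslookup handling of search domains varies by version.)
+kubectl run dns-probe --rm -it --restart=Never --image=busybox -n dns-2104 -- nslookup dns-test-service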
+•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":49,"skipped":942,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:54.762: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-2169 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:15:54.976: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:15:59.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-2169" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":50,"skipped":967,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:15:59.356: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3422 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-3422, will wait for the garbage collector to delete the pods +Oct 27 14:16:03.659: INFO: Deleting Job.batch foo took: 14.551397ms +Oct 27 14:16:03.760: INFO: Terminating Job.batch foo pods took: 100.638233ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:16:35.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-3422" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":51,"skipped":1002,"failed":0} +SSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:16:35.307: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7343 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-c799c430-5434-40cb-8104-ee87f76f50c6 in namespace container-probe-7343 +Oct 27 14:16:37.561: INFO: Started pod liveness-c799c430-5434-40cb-8104-ee87f76f50c6 in namespace container-probe-7343 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:16:37.573: INFO: Initial restart count of pod liveness-c799c430-5434-40cb-8104-ee87f76f50c6 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:37.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7343" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":52,"skipped":1005,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:37.672: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-4874 +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:20:37.863: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:38.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-4874" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":53,"skipped":1007,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:38.575: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-4933 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:38.802: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption-2 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2-2579 +STEP: Waiting for a default service account to be provisioned in 
namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-4933 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:39.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-2579" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:39.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-4933" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":54,"skipped":1017,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:39.166: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-2423 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Oct 27 14:20:39.399: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2423 146edc98-d572-4961-95b0-fa2bfe57c9ce 10381 0 2021-10-27 14:20:39 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:20:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:20:39.399: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2423 146edc98-d572-4961-95b0-fa2bfe57c9ce 10382 0 2021-10-27 14:20:39 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:20:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Oct 27 14:20:39.447: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2423 146edc98-d572-4961-95b0-fa2bfe57c9ce 10383 0 2021-10-27 14:20:39 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:20:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:20:39.447: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2423 146edc98-d572-4961-95b0-fa2bfe57c9ce 10384 0 2021-10-27 14:20:39 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-27 14:20:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:39.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-2423" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":55,"skipped":1049,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:39.473: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename limitrange +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-5755 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Oct 27 14:20:39.686: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Oct 27 14:20:39.708: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 14:20:39.708: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Oct 27 14:20:39.756: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 27 14:20:39.756: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Oct 27 14:20:39.787: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Oct 27 14:20:39.787: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Oct 27 14:20:46.907: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:46.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-5755" for this suite. +•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":346,"completed":56,"skipped":1060,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:46.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-9528 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pod templates +Oct 27 14:20:47.162: INFO: created test-podtemplate-1 +Oct 27 14:20:47.174: INFO: created test-podtemplate-2 +Oct 27 14:20:47.186: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Oct 27 14:20:47.198: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Oct 27 14:20:47.216: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:47.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-9528" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":57,"skipped":1069,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:47.253: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3531 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on tmpfs +Oct 27 14:20:47.465: INFO: Waiting up to 5m0s for pod "pod-0a88807a-e56d-470e-95ff-038109ee6b58" in namespace "emptydir-3531" to be "Succeeded or Failed" +Oct 27 14:20:47.476: INFO: Pod "pod-0a88807a-e56d-470e-95ff-038109ee6b58": Phase="Pending", Reason="", readiness=false. Elapsed: 11.224354ms +Oct 27 14:20:49.489: INFO: Pod "pod-0a88807a-e56d-470e-95ff-038109ee6b58": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023990139s +Oct 27 14:20:51.501: INFO: Pod "pod-0a88807a-e56d-470e-95ff-038109ee6b58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036728024s +STEP: Saw pod success +Oct 27 14:20:51.501: INFO: Pod "pod-0a88807a-e56d-470e-95ff-038109ee6b58" satisfied condition "Succeeded or Failed" +Oct 27 14:20:51.513: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-0a88807a-e56d-470e-95ff-038109ee6b58 container test-container: +STEP: delete the pod +Oct 27 14:20:51.557: INFO: Waiting for pod pod-0a88807a-e56d-470e-95ff-038109ee6b58 to disappear +Oct 27 14:20:51.569: INFO: Pod pod-0a88807a-e56d-470e-95ff-038109ee6b58 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:51.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3531" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":58,"skipped":1085,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:51.603: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9721 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:51.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9721" for this suite. 
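+
+For reference, the "secure master service" check above reduces to verifying that the built-in `kubernetes` Service in the `default` namespace exposes HTTPS on port 443. A rough manual equivalent (an approximation of the test's intent, not a command from this run):
+
+```bash
+# Print the port named "https" of the default/kubernetes Service; expect 443.
+kubectl get service kubernetes -n default \
+  -o jsonpath='{.spec.ports[?(@.name=="https")].port}{"\n"}'
+```
+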
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":59,"skipped":1121,"failed":0} +SSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:51.832: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-893 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-projected-all-test-volume-7941e560-0666-4dca-baea-9d219004f8bf +STEP: Creating secret with name secret-projected-all-test-volume-2568ac53-cde3-49e4-83f5-75b8c89b9b64 +STEP: Creating a pod to test Check all projections for projected volume plugin +Oct 27 14:20:52.129: INFO: Waiting up to 5m0s for pod "projected-volume-ede16f86-0840-462b-a41e-3c50e3709044" in namespace "projected-893" to be "Succeeded or Failed" +Oct 27 14:20:52.233: INFO: Pod "projected-volume-ede16f86-0840-462b-a41e-3c50e3709044": Phase="Pending", Reason="", readiness=false. Elapsed: 104.613115ms +Oct 27 14:20:54.246: INFO: Pod "projected-volume-ede16f86-0840-462b-a41e-3c50e3709044": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.117197127s +STEP: Saw pod success +Oct 27 14:20:54.246: INFO: Pod "projected-volume-ede16f86-0840-462b-a41e-3c50e3709044" satisfied condition "Succeeded or Failed" +Oct 27 14:20:54.258: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod projected-volume-ede16f86-0840-462b-a41e-3c50e3709044 container projected-all-volume-test: +STEP: delete the pod +Oct 27 14:20:54.296: INFO: Waiting for pod projected-volume-ede16f86-0840-462b-a41e-3c50e3709044 to disappear +Oct 27 14:20:54.308: INFO: Pod projected-volume-ede16f86-0840-462b-a41e-3c50e3709044 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:54.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-893" for this suite. 
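+
+The "Projected combined" test above mounts a ConfigMap, a Secret, and downward API fields through a single projected volume. A minimal sketch of such a pod, with all names and the image chosen here purely for illustration:
+
+```bash
+# Assumes demo-config and demo-secret already exist in the namespace.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: show
+    image: busybox:1.36
+    command: ["sh", "-c", "ls /projected && cat /projected/podname"]
+    volumeMounts:
+    - name: all-in-one
+      mountPath: /projected
+  volumes:
+  - name: all-in-one
+    projected:
+      sources:
+      - configMap:
+          name: demo-config
+      - secret:
+          name: demo-secret
+      - downwardAPI:
+          items:
+          - path: podname
+            fieldRef:
+              fieldPath: metadata.name
+EOF
+```
+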
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":60,"skipped":1125,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:54.343: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-2606 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 14:20:54.538: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 14:20:54.565: INFO: Waiting for terminating namespaces to be deleted... +Oct 27 14:20:54.577: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 before test +Oct 27 14:20:54.611: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-vv84b from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: apiserver-proxy-sl296 from kube-system started at 2021-10-27 13:56:02 +0000 UTC (2 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: calico-node-4h2tf from kube-system started at 2021-10-27 13:58:05 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: calico-node-vertical-autoscaler-785b5f968-9qxv8 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-s7nwv from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: calico-typha-vertical-autoscaler-5c9655cddd-qxmpq from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: coredns-6944b5cf58-cqcmx from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: coredns-6944b5cf58-qwp9p from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: csi-driver-node-l4n7m from kube-system started at 2021-10-27 13:56:02 +0000 
UTC (3 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: kube-proxy-85xr2 from kube-system started at 2021-10-27 13:59:36 +0000 UTC (2 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: metrics-server-6b8fdcd747-t4xbj from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: node-exporter-cwjxv from kube-system started at 2021-10-27 13:56:02 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: node-problem-detector-shkl7 from kube-system started at 2021-10-27 13:56:02 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: vpn-shoot-77b49d5987-8ddn6 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: dashboard-metrics-scraper-7ccbfc448f-l8nhq from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 14:20:54.611: INFO: kubernetes-dashboard-7888b55b49-xptfd from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.611: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 14:20:54.611: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc before test +Oct 27 14:20:54.630: INFO: addons-nginx-ingress-controller-d5756fc97-fcls5 from kube-system started at 2021-10-27 14:02:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: apiserver-proxy-z9z6b from kube-system started at 2021-10-27 13:56:05 +0000 UTC (2 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: blackbox-exporter-65c549b94c-rjgf7 from kube-system started at 2021-10-27 14:03:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: calico-kube-controllers-56bcbfb5c5-f9t75 from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: calico-node-7gp7f from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: calico-typha-deploy-546b97d4b5-z8pql from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 
14:20:54.630: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: csi-driver-node-4sm4p from kube-system started at 2021-10-27 13:56:05 +0000 UTC (3 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: kube-proxy-j2k28 from kube-system started at 2021-10-27 13:59:36 +0000 UTC (2 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: node-exporter-zsjq5 from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:20:54.630: INFO: node-problem-detector-nwtmj from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 14:20:54.630: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.16b1e91dfb36f937], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:20:55.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-2606" for this suite. 
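+
+The scheduling failure above is the expected outcome: a Pod whose `nodeSelector` matches no node stays Pending and the scheduler emits a FailedScheduling event. A hypothetical reproduction (label key and image are placeholders):
+
+```bash
+kubectl run restricted-pod --image=k8s.gcr.io/pause:3.5 --restart=Never \
+  --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"example.com/nonexistent":"true"}}}'
+# The pod stays Pending; the scheduler records why:
+kubectl get events --field-selector reason=FailedScheduling
+```
+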
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":61,"skipped":1127,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:20:55.745: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9399 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-9399/configmap-test-5fd7ad6c-0b3f-4bd4-b799-0c4d749e6fac +STEP: Creating a pod to test consume configMaps +Oct 27 14:20:55.970: INFO: Waiting up to 5m0s for pod "pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076" in namespace "configmap-9399" to be "Succeeded or Failed" +Oct 27 14:20:55.982: INFO: Pod "pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076": Phase="Pending", Reason="", readiness=false. Elapsed: 11.691026ms +Oct 27 14:20:57.995: INFO: Pod "pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024923049s +Oct 27 14:21:00.008: INFO: Pod "pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038122734s +STEP: Saw pod success +Oct 27 14:21:00.008: INFO: Pod "pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076" satisfied condition "Succeeded or Failed" +Oct 27 14:21:00.021: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076 container env-test: +STEP: delete the pod +Oct 27 14:21:00.065: INFO: Waiting for pod pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076 to disappear +Oct 27 14:21:00.077: INFO: Pod pod-configmaps-11a711f3-345b-41b8-9cba-d8751fed6076 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:00.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9399" for this suite. 
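+
+The ConfigMap test above injects a ConfigMap key into a container environment variable via `valueFrom`. A minimal analogue (all names here are invented):
+
+```bash
+kubectl create configmap demo-config --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: env-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: env-test
+    image: busybox:1.36
+    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
+    env:
+    - name: CONFIG_DATA_1
+      valueFrom:
+        configMapKeyRef:
+          name: demo-config
+          key: data-1
+EOF
+```
+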
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":62,"skipped":1136,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:00.112: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1565 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating api versions +Oct 27 14:21:00.304: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1565 api-versions' +Oct 27 14:21:00.433: INFO: stderr: "" +Oct 27 14:21:00.433: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling.k8s.io/v1\nautoscaling.k8s.io/v1beta2\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncert.gardener.cloud/v1alpha1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\ndns.gardener.cloud/v1alpha1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:21:00.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1565" for this suite. 
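+
+The `api-versions` assertion above boils down to confirming that the core group is among the served versions, which can be checked directly:
+
+```bash
+# Succeeds (exit 0) only if the core "v1" group/version is listed.
+kubectl api-versions | grep -x v1
+```
+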
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":63,"skipped":1171,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:21:00.459: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-3169 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-3169 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 14:21:00.691: INFO: Found 0 stateful pods, waiting for 3 +Oct 27 14:21:10.707: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:21:10.707: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:21:10.707: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:21:10.742: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3169 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:21:11.407: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:21:11.407: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:21:11.407: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 14:21:21.494: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Oct 27 14:21:21.530: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3169 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:21:21.905: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:21:21.905: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:21:21.905: INFO: stdout of mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:21:31.977: INFO: Waiting for StatefulSet statefulset-3169/ss2 to complete update +Oct 27 14:21:31.977: INFO: Waiting for Pod statefulset-3169/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 14:21:31.977: INFO: Waiting for Pod statefulset-3169/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 14:21:42.004: INFO: Waiting for StatefulSet statefulset-3169/ss2 to complete update +Oct 27 14:21:42.004: INFO: Waiting for Pod statefulset-3169/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Rolling back to a previous revision +Oct 27 14:21:52.003: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3169 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:22:13.182: INFO: rc: 1 +Oct 27 14:22:13.182: INFO: Waiting 10s to retry failed RunHostCmd: error running /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3169 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: +Command stdout: + +stderr: +Error from server: error dialing backend: proxy error from vpn-seed-server:9443 while dialing 10.250.0.2:10250, code 503: 503 Service Unavailable + +error: +exit status 1 +Oct 27 14:22:23.184: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3169 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 14:22:23.520: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 14:22:23.520: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 14:22:23.520: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 14:22:33.607: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Oct 27 14:22:43.670: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3169 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 14:22:44.062: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 14:22:44.063: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 14:22:44.063: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 14:22:54.135: INFO: Waiting for StatefulSet statefulset-3169/ss2 to complete update +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:23:04.163: INFO: Deleting all statefulset in ns statefulset-3169 +Oct 27 14:23:04.174: INFO: Scaling statefulset ss2 to 0 +Oct 27 
14:23:14.226: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:23:14.237: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:14.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-3169" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":64,"skipped":1180,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:14.331: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8187 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:23:15.229: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33789510-8c73-4740-a02a-2c53bd11ae8e" in namespace "projected-8187" to be "Succeeded or Failed" +Oct 27 14:23:15.241: INFO: Pod "downwardapi-volume-33789510-8c73-4740-a02a-2c53bd11ae8e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.770628ms +Oct 27 14:23:17.254: INFO: Pod "downwardapi-volume-33789510-8c73-4740-a02a-2c53bd11ae8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024750814s +STEP: Saw pod success +Oct 27 14:23:17.254: INFO: Pod "downwardapi-volume-33789510-8c73-4740-a02a-2c53bd11ae8e" satisfied condition "Succeeded or Failed" +Oct 27 14:23:17.265: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-33789510-8c73-4740-a02a-2c53bd11ae8e container client-container: +STEP: delete the pod +Oct 27 14:23:17.302: INFO: Waiting for pod downwardapi-volume-33789510-8c73-4740-a02a-2c53bd11ae8e to disappear +Oct 27 14:23:17.313: INFO: Pod downwardapi-volume-33789510-8c73-4740-a02a-2c53bd11ae8e no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:17.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8187" for this suite. 
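+
+The downward API test above publishes the container's memory limit through a projected file. A minimal sketch of the same mechanism (pod, volume, and file names are hypothetical):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-limit-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox:1.36
+    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
+    resources:
+      limits:
+        memory: "64Mi"
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: memory_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.memory
+EOF
+```
+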
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":65,"skipped":1214,"failed":0} +S +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:17.347: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6499 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:33.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6499" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":346,"completed":66,"skipped":1215,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:33.773: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8130 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:23:34.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941414, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941414, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941414, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941414, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:23:37.463: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:37.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8130" for this suite. +STEP: Destroying namespace "webhook-8130-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":67,"skipped":1269,"failed":0} +SSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:37.791: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-502 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:23:38.017: INFO: Waiting up to 5m0s for pod "downward-api-25653252-9e07-44b5-8395-8a2bb140e86e" in namespace "downward-api-502" to be "Succeeded or Failed" +Oct 27 14:23:38.028: INFO: Pod "downward-api-25653252-9e07-44b5-8395-8a2bb140e86e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.773132ms +Oct 27 14:23:40.042: INFO: Pod "downward-api-25653252-9e07-44b5-8395-8a2bb140e86e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024243194s +STEP: Saw pod success +Oct 27 14:23:40.042: INFO: Pod "downward-api-25653252-9e07-44b5-8395-8a2bb140e86e" satisfied condition "Succeeded or Failed" +Oct 27 14:23:40.054: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downward-api-25653252-9e07-44b5-8395-8a2bb140e86e container dapi-container: +STEP: delete the pod +Oct 27 14:23:40.095: INFO: Waiting for pod downward-api-25653252-9e07-44b5-8395-8a2bb140e86e to disappear +Oct 27 14:23:40.106: INFO: Pod downward-api-25653252-9e07-44b5-8395-8a2bb140e86e no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:40.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-502" for this suite. 
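+
+The Downward API test above relies on a documented fallback: when a container declares no CPU/memory limits, `resourceFieldRef` resolves `limits.cpu`/`limits.memory` to the node's allocatable values. Sketched with env vars (names and image are illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-defaults-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox:1.36
+    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
+    env:
+    - name: CPU_LIMIT
+      valueFrom:
+        resourceFieldRef:
+          resource: limits.cpu
+    - name: MEMORY_LIMIT
+      valueFrom:
+        resourceFieldRef:
+          resource: limits.memory
+EOF
+```
+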
+•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":68,"skipped":1272,"failed":0} + +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:40.140: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6328 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:23:44.378: INFO: Deleting pod "var-expansion-70feac12-05ad-41ba-9cb7-fe2b8c03de3c" in namespace "var-expansion-6328" +Oct 27 14:23:44.394: INFO: Wait up to 5m0s for pod "var-expansion-70feac12-05ad-41ba-9cb7-fe2b8c03de3c" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:46.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6328" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":69,"skipped":1272,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:46.452: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-627 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Oct 27 14:23:46.698: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:46.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: 
Destroying namespace "events-627" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":70,"skipped":1289,"failed":0} +SSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:46.758: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingressclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingressclass-377 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:23:47.048: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:23:47.082: INFO: waiting for watch events with expected annotations +Oct 27 14:23:47.082: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:47.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-377" for this suite. 
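+
+The IngressClass API operations above (create, get, list, watch, patch, update, delete, delete collection) act on objects of this minimal shape; the controller string here is a placeholder:
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: IngressClass
+metadata:
+  name: demo-class
+spec:
+  controller: example.com/ingress-controller
+EOF
+kubectl get ingressclass demo-class -o yaml
+kubectl delete ingressclass demo-class
+```
+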
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":71,"skipped":1295,"failed":0} +SS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:47.172: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2561 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 14:23:47.358: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2561 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' +Oct 27 14:23:47.477: INFO: stderr: "" +Oct 27 14:23:47.477: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 +Oct 27 14:23:47.489: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2561 delete pods e2e-test-httpd-pod' +Oct 27 14:23:50.667: INFO: stderr: "" +Oct 27 14:23:50.667: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:50.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2561" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":72,"skipped":1297,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:50.701: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-284 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:23:50.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a" in namespace "projected-284" to be "Succeeded or Failed" +Oct 27 14:23:50.936: INFO: Pod "downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.124053ms +Oct 27 14:23:52.948: INFO: Pod "downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02483346s +Oct 27 14:23:54.960: INFO: Pod "downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036944105s +STEP: Saw pod success +Oct 27 14:23:54.960: INFO: Pod "downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a" satisfied condition "Succeeded or Failed" +Oct 27 14:23:54.972: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a container client-container: +STEP: delete the pod +Oct 27 14:23:55.047: INFO: Waiting for pod downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a to disappear +Oct 27 14:23:55.058: INFO: Pod downwardapi-volume-dc929ecb-bab0-4c41-a847-5a009911ac2a no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:23:55.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-284" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":73,"skipped":1323,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:23:55.091: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-4536 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:23:55.312: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:24:55.419: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:55.431: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-9469 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:24:55.658: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Oct 27 14:24:55.670: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:55.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-9469" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:24:55.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-4536" for this suite. 
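+
+The two "Forbidden" responses above confirm that a PriorityClass's `value` is immutable after creation, while mutable fields such as the description can still be updated. Illustrated with hypothetical objects:
+
+```bash
+kubectl create priorityclass demo-p1 --value=1 --description="demo"
+kubectl patch priorityclass demo-p1 --type=merge -p '{"description":"updated"}'  # accepted
+kubectl patch priorityclass demo-p1 --type=merge -p '{"value":2}'               # rejected: Forbidden
+```
+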
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":74,"skipped":1350,"failed":0} + +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:24:55.870: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9506 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:24:56.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:24:58.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941496, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:25:01.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:25:01.875: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:04.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9506" for this suite. +STEP: Destroying namespace "webhook-9506-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":75,"skipped":1350,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:05.033: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4508 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name secret-emptykey-test-862da851-fec3-48b3-9d43-19d85c6f515f +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:05.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4508" for this suite. 
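The negative test above succeeds because API server validation rejects a Secret whose data map contains an empty key. The same rejection can be reproduced outside the suite (Secret name is hypothetical):

```bash
# The apply should fail validation: "" is not a valid Secret data key.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: empty-key-demo
data:
  "": dmFsdWU=
EOF
```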
+•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":76,"skipped":1362,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:05.257: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7493 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-7493 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-7493 +I1027 14:25:05.498415 5683 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7493, replica count: 2 +Oct 27 14:25:08.550: INFO: Creating new exec pod +I1027 14:25:08.550105 5683 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:25:13.593: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7493 exec execpod6x878 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:25:13.985: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:25:13.985: INFO: stdout: "" +Oct 27 14:25:14.985: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7493 exec execpod6x878 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:25:15.445: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:25:15.445: INFO: stdout: "" +Oct 27 14:25:15.986: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7493 exec execpod6x878 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:25:16.432: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:25:16.432: INFO: stdout: "" +Oct 27 14:25:16.986: INFO: 
Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7493 exec execpod6x878 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:25:17.369: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:25:17.369: INFO: stdout: "externalname-service-9dtfw" +Oct 27 14:25:17.369: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7493 exec execpod6x878 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.69.124.232 80' +Oct 27 14:25:17.749: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.69.124.232 80\nConnection to 100.69.124.232 80 port [tcp/http] succeeded!\n" +Oct 27 14:25:17.749: INFO: stdout: "externalname-service-9dtfw" +Oct 27 14:25:17.749: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:17.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7493" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":77,"skipped":1413,"failed":0} + +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:17.804: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-487 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:25:18.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0" in namespace "downward-api-487" to be "Succeeded or Failed" +Oct 27 14:25:18.026: INFO: Pod "downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.055732ms +Oct 27 14:25:20.039: INFO: Pod "downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024323545s +Oct 27 14:25:22.052: INFO: Pod "downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037440213s +STEP: Saw pod success +Oct 27 14:25:22.052: INFO: Pod "downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0" satisfied condition "Succeeded or Failed" +Oct 27 14:25:22.063: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0 container client-container: +STEP: delete the pod +Oct 27 14:25:22.140: INFO: Waiting for pod downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0 to disappear +Oct 27 14:25:22.151: INFO: Pod downwardapi-volume-952258bc-5b50-4197-8fc0-1b803d7d75d0 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:22.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-487" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":78,"skipped":1413,"failed":0} +SSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:22.185: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5066 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:25:22.414: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Oct 27 14:25:26.462: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Oct 27 14:25:26.500: INFO: observed ReplicaSet test-rs in namespace replicaset-5066 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 14:25:26.539: INFO: observed ReplicaSet test-rs in namespace replicaset-5066 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 14:25:26.548: INFO: observed ReplicaSet test-rs in namespace replicaset-5066 with ReadyReplicas 1, AvailableReplicas 1 +Oct 27 14:25:28.535: INFO: observed ReplicaSet test-rs in namespace replicaset-5066 with ReadyReplicas 2, AvailableReplicas 2 +Oct 27 14:25:28.848: INFO: observed Replicaset test-rs in namespace replicaset-5066 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:28.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5066" for this suite. 
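The scale-up and patch steps above can be approximated with plain `kubectl` against any ReplicaSet; the namespace, name, and patch body below are hypothetical (the run's own objects were deleted along with the namespace), and the test's exact patch payload is not shown in the log:

```bash
# Scale the ReplicaSet up, as the "Scaling up" step does.
kubectl -n demo scale rs test-rs --replicas=3

# Apply a merge patch, then confirm the ready-replica count converges.
kubectl -n demo patch rs test-rs --type=merge \
  -p '{"metadata":{"labels":{"test-rs":"patched"}}}'
kubectl -n demo get rs test-rs -o jsonpath='{.status.readyReplicas}'
```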
+•{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":79,"skipped":1416,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:28.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-5497 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 14:25:29.105: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:25:31.118: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:25:31.158: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:25:33.171: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 14:25:33.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 14:25:33.233: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 27 14:25:35.234: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 14:25:35.246: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 27 14:25:37.234: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 27 14:25:37.246: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:25:37.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-5497" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":80,"skipped":1519,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:25:37.281: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-2286 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-hzrd +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:25:37.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hzrd" in namespace "subpath-2286" to be "Succeeded or Failed" +Oct 27 14:25:37.525: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.049245ms +Oct 27 14:25:39.537: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022661123s +Oct 27 14:25:41.549: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 4.034784433s +Oct 27 14:25:43.562: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 6.048170766s +Oct 27 14:25:45.574: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 8.060026429s +Oct 27 14:25:47.586: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 10.071735395s +Oct 27 14:25:49.599: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 12.084571737s +Oct 27 14:25:51.613: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 14.098413411s +Oct 27 14:25:53.625: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 16.110760133s +Oct 27 14:25:55.638: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 18.123825636s +Oct 27 14:25:57.651: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 20.136267667s +Oct 27 14:25:59.662: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Running", Reason="", readiness=true. Elapsed: 22.148147679s +Oct 27 14:26:01.676: INFO: Pod "pod-subpath-test-configmap-hzrd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.161697312s +STEP: Saw pod success +Oct 27 14:26:01.676: INFO: Pod "pod-subpath-test-configmap-hzrd" satisfied condition "Succeeded or Failed" +Oct 27 14:26:01.687: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-subpath-test-configmap-hzrd container test-container-subpath-configmap-hzrd: +STEP: delete the pod +Oct 27 14:26:01.730: INFO: Waiting for pod pod-subpath-test-configmap-hzrd to disappear +Oct 27 14:26:01.747: INFO: Pod pod-subpath-test-configmap-hzrd no longer exists +STEP: Deleting pod pod-subpath-test-configmap-hzrd +Oct 27 14:26:01.747: INFO: Deleting pod "pod-subpath-test-configmap-hzrd" in namespace "subpath-2286" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:26:01.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-2286" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":81,"skipped":1619,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:26:01.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-2439 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-2439 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 27 14:26:02.022: INFO: Found 0 stateful pods, waiting for 3 +Oct 27 14:26:12.034: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:26:12.034: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:26:12.034: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 27 14:26:12.104: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Oct 27 14:26:12.163: INFO: Updating stateful set ss2 +Oct 27 14:26:12.187: INFO: Waiting for Pod 
statefulset-2439/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Restoring Pods to the correct revision when they are deleted +Oct 27 14:26:22.257: INFO: Found 2 stateful pods, waiting for 3 +Oct 27 14:26:32.270: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:26:32.271: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:26:32.271: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Oct 27 14:26:32.330: INFO: Updating stateful set ss2 +Oct 27 14:26:32.354: INFO: Waiting for Pod statefulset-2439/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 27 14:26:42.420: INFO: Updating stateful set ss2 +Oct 27 14:26:42.456: INFO: Waiting for StatefulSet statefulset-2439/ss2 to complete update +Oct 27 14:26:42.456: INFO: Waiting for Pod statefulset-2439/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:26:52.481: INFO: Deleting all statefulset in ns statefulset-2439 +Oct 27 14:26:52.492: INFO: Scaling statefulset ss2 to 0 +Oct 27 14:27:02.545: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:27:02.557: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:02.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2439" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":82,"skipped":1647,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:02.628: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6822 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-7c5be36d-b6d3-40bf-9eff-8f222912172f +STEP: Creating a pod to test consume configMaps +Oct 27 14:27:02.850: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202" in namespace "projected-6822" to be "Succeeded or Failed" +Oct 27 14:27:02.861: INFO: Pod "pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.772402ms +Oct 27 14:27:04.874: INFO: Pod "pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023804852s +Oct 27 14:27:06.886: INFO: Pod "pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035385012s +STEP: Saw pod success +Oct 27 14:27:06.886: INFO: Pod "pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202" satisfied condition "Succeeded or Failed" +Oct 27 14:27:06.897: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202 container agnhost-container: +STEP: delete the pod +Oct 27 14:27:06.970: INFO: Waiting for pod pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202 to disappear +Oct 27 14:27:06.981: INFO: Pod pod-projected-configmaps-8b9d033f-2eaf-4bc3-ac79-d17daa719202 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:06.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6822" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":83,"skipped":1654,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:07.015: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-456 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:27:08.042: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 14:27:10.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941628, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941628, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941628, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63770941628, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:27:13.112: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Oct 27 14:27:17.241: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=webhook-456 attach --namespace=webhook-456 to-be-attached-pod -i -c=container1' +Oct 27 14:27:17.452: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:17.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-456" for this suite. +STEP: Destroying namespace "webhook-456-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":84,"skipped":1669,"failed":0} +SSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:17.573: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-4983 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-4983 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:27:17.778: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:27:17.854: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:27:19.867: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:21.866: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:23.866: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:25.867: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:27.865: INFO: The 
status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:29.867: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:31.867: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:33.867: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:35.866: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:27:37.866: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:27:37.888: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:27:39.953: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:27:39.953: INFO: Breadth first check of 100.96.0.47 on host 10.250.0.2... +Oct 27 14:27:39.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.87:9080/dial?request=hostname&protocol=http&host=100.96.0.47&port=8083&tries=1'] Namespace:pod-network-test-4983 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:27:39.965: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:27:40.238: INFO: Waiting for responses: map[] +Oct 27 14:27:40.238: INFO: reached 100.96.0.47 after 0/1 tries +Oct 27 14:27:40.238: INFO: Breadth first check of 100.96.1.86 on host 10.250.0.3... +Oct 27 14:27:40.251: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.87:9080/dial?request=hostname&protocol=http&host=100.96.1.86&port=8083&tries=1'] Namespace:pod-network-test-4983 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:27:40.251: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:27:40.618: INFO: Waiting for responses: map[] +Oct 27 14:27:40.618: INFO: reached 100.96.1.86 after 0/1 tries +Oct 27 14:27:40.618: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:40.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-4983" for this suite. 
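The breadth-first check above boils down to exec'ing `curl` in the probe pod against each netserver's `/dial` endpoint. While the pods exist, the same request can be issued by hand (pod names and IPs below are specific to this run and taken from the log):

```bash
# Same probe the framework runs: ask the test pod's agnhost webserver
# to dial a netserver pod over HTTP and report the hostname it reached.
kubectl -n pod-network-test-4983 exec test-container-pod -- /bin/sh -c \
  "curl -g -q -s 'http://100.96.1.87:9080/dial?request=hostname&protocol=http&host=100.96.0.47&port=8083&tries=1'"
```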
+•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":85,"skipped":1673,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:40.653: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3067 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:40.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3067" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":86,"skipped":1677,"failed":0} +SS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:40.901: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8299 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:27:41.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941661, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941661, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941661, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941661, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:27:44.459: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:44.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8299" for this suite. +STEP: Destroying namespace "webhook-8299-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":87,"skipped":1679,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:44.632: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslicemirroring-8420 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: mirroring a new custom Endpoint +Oct 27 14:27:44.869: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint +STEP: mirroring deletion of a custom Endpoint +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:46.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-8420" for this suite. 
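The behaviour exercised above: for a selector-less Service, a hand-written Endpoints object is mirrored into an EndpointSlice by the control plane's mirroring controller. A minimal sketch (names and backend address are hypothetical):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: example-custom-endpoints   # no selector, so endpoints are manual
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: example-custom-endpoints   # must match the Service name
subsets:
- addresses:
  - ip: 10.2.3.4                   # hypothetical backend address
  ports:
  - port: 80
EOF

# The mirrored slice is labelled with the owning service's name.
kubectl get endpointslices -l kubernetes.io/service-name=example-custom-endpoints
```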
+•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":88,"skipped":1706,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:46.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svc-latency +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-5361 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:27:47.146: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating replication controller svc-latency-rc in namespace svc-latency-5361 +I1027 14:27:47.165801 5683 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5361, replica count: 1 +I1027 14:27:48.226324 5683 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 14:27:49.226572 5683 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:27:49.345: INFO: Created: latency-svc-lcqzd +Oct 27 14:27:49.351: INFO: Got endpoints: latency-svc-lcqzd [24.53637ms] +Oct 27 14:27:49.368: INFO: Created: latency-svc-xmbn4 +Oct 27 14:27:49.374: INFO: Got endpoints: latency-svc-xmbn4 [22.423692ms] +Oct 27 14:27:49.377: INFO: Created: latency-svc-qkkss +Oct 27 14:27:49.380: INFO: Got endpoints: latency-svc-qkkss [28.677153ms] +Oct 27 14:27:49.384: INFO: Created: latency-svc-ts58z +Oct 27 14:27:49.391: INFO: Got endpoints: latency-svc-ts58z [39.403203ms] +Oct 27 14:27:49.392: INFO: Created: latency-svc-2xgxb +Oct 27 14:27:49.428: INFO: Got endpoints: latency-svc-2xgxb [76.929483ms] +Oct 27 14:27:49.430: INFO: Created: latency-svc-j5hfd +Oct 27 14:27:49.435: INFO: Created: latency-svc-4b8dh +Oct 27 14:27:49.435: INFO: Got endpoints: latency-svc-j5hfd [83.978004ms] +Oct 27 14:27:49.438: INFO: Got endpoints: latency-svc-4b8dh [86.989086ms] +Oct 27 14:27:49.442: INFO: Created: latency-svc-45r9h +Oct 27 14:27:49.447: INFO: Got endpoints: latency-svc-45r9h [95.830097ms] +Oct 27 14:27:49.448: INFO: Created: latency-svc-dw24p +Oct 27 14:27:49.451: INFO: Got endpoints: latency-svc-dw24p [99.549397ms] +Oct 27 14:27:49.454: INFO: Created: latency-svc-958nz +Oct 27 14:27:49.459: INFO: Got endpoints: latency-svc-958nz [23.836112ms] +Oct 27 14:27:49.460: INFO: Created: latency-svc-2gbgc +Oct 27 14:27:49.530: INFO: Got endpoints: latency-svc-2gbgc [178.566917ms] +Oct 27 14:27:49.533: INFO: Created: latency-svc-zwb6z +Oct 27 14:27:49.539: INFO: Got endpoints: latency-svc-zwb6z [187.575087ms] +Oct 27 14:27:49.540: INFO: Created: latency-svc-g4t7j +Oct 27 14:27:49.545: INFO: Got endpoints: latency-svc-g4t7j [193.626817ms] +Oct 27 14:27:49.546: INFO: Created: latency-svc-w7cdt +Oct 27 
14:27:49.629: INFO: Got endpoints: latency-svc-w7cdt [277.446988ms] +Oct 27 14:27:49.634: INFO: Created: latency-svc-5hk98 +Oct 27 14:27:49.639: INFO: Created: latency-svc-6rbxs +Oct 27 14:27:49.642: INFO: Got endpoints: latency-svc-5hk98 [290.250664ms] +Oct 27 14:27:49.642: INFO: Got endpoints: latency-svc-6rbxs [290.729566ms] +Oct 27 14:27:49.645: INFO: Created: latency-svc-j7j2q +Oct 27 14:27:49.649: INFO: Got endpoints: latency-svc-j7j2q [298.114577ms] +Oct 27 14:27:49.730: INFO: Created: latency-svc-87cfz +Oct 27 14:27:49.736: INFO: Got endpoints: latency-svc-87cfz [361.818571ms] +Oct 27 14:27:49.737: INFO: Created: latency-svc-bzmfq +Oct 27 14:27:49.742: INFO: Got endpoints: latency-svc-bzmfq [362.50594ms] +Oct 27 14:27:49.743: INFO: Created: latency-svc-n28mb +Oct 27 14:27:49.746: INFO: Got endpoints: latency-svc-n28mb [355.114876ms] +Oct 27 14:27:49.749: INFO: Created: latency-svc-s5v5d +Oct 27 14:27:49.829: INFO: Got endpoints: latency-svc-s5v5d [400.350961ms] +Oct 27 14:27:49.831: INFO: Created: latency-svc-skd9b +Oct 27 14:27:49.837: INFO: Got endpoints: latency-svc-skd9b [398.653911ms] +Oct 27 14:27:49.837: INFO: Created: latency-svc-fd6rb +Oct 27 14:27:49.844: INFO: Got endpoints: latency-svc-fd6rb [397.06511ms] +Oct 27 14:27:49.849: INFO: Created: latency-svc-kgzj8 +Oct 27 14:27:49.854: INFO: Got endpoints: latency-svc-kgzj8 [402.697895ms] +Oct 27 14:27:49.854: INFO: Created: latency-svc-6zttw +Oct 27 14:27:49.929: INFO: Got endpoints: latency-svc-6zttw [469.17817ms] +Oct 27 14:27:49.930: INFO: Created: latency-svc-g2kl5 +Oct 27 14:27:49.935: INFO: Got endpoints: latency-svc-g2kl5 [405.418259ms] +Oct 27 14:27:49.938: INFO: Created: latency-svc-6vmk5 +Oct 27 14:27:49.942: INFO: Got endpoints: latency-svc-6vmk5 [403.213851ms] +Oct 27 14:27:49.946: INFO: Created: latency-svc-np4f4 +Oct 27 14:27:49.952: INFO: Created: latency-svc-9kz6r +Oct 27 14:27:49.954: INFO: Got endpoints: latency-svc-np4f4 [408.639954ms] +Oct 27 14:27:49.957: INFO: Got endpoints: latency-svc-9kz6r [328.044908ms] +Oct 27 14:27:50.029: INFO: Created: latency-svc-j84h2 +Oct 27 14:27:50.033: INFO: Got endpoints: latency-svc-j84h2 [391.65368ms] +Oct 27 14:27:50.038: INFO: Created: latency-svc-f4kx9 +Oct 27 14:27:50.043: INFO: Got endpoints: latency-svc-f4kx9 [400.630361ms] +Oct 27 14:27:50.045: INFO: Created: latency-svc-tgbht +Oct 27 14:27:50.052: INFO: Created: latency-svc-cfqqp +Oct 27 14:27:50.052: INFO: Got endpoints: latency-svc-tgbht [402.59307ms] +Oct 27 14:27:50.131: INFO: Got endpoints: latency-svc-cfqqp [395.071968ms] +Oct 27 14:27:50.133: INFO: Created: latency-svc-2w82r +Oct 27 14:27:50.143: INFO: Created: latency-svc-6zr6c +Oct 27 14:27:50.144: INFO: Got endpoints: latency-svc-2w82r [397.77062ms] +Oct 27 14:27:50.146: INFO: Got endpoints: latency-svc-6zr6c [403.225801ms] +Oct 27 14:27:50.146: INFO: Created: latency-svc-rgmk8 +Oct 27 14:27:50.152: INFO: Got endpoints: latency-svc-rgmk8 [322.952184ms] +Oct 27 14:27:50.152: INFO: Created: latency-svc-7lx5n +Oct 27 14:27:50.231: INFO: Got endpoints: latency-svc-7lx5n [393.450254ms] +Oct 27 14:27:50.232: INFO: Created: latency-svc-2dtvk +Oct 27 14:27:50.239: INFO: Created: latency-svc-q4mmz +Oct 27 14:27:50.240: INFO: Got endpoints: latency-svc-2dtvk [395.437418ms] +Oct 27 14:27:50.244: INFO: Got endpoints: latency-svc-q4mmz [390.368902ms] +Oct 27 14:27:50.246: INFO: Created: latency-svc-trbn9 +Oct 27 14:27:50.251: INFO: Got endpoints: latency-svc-trbn9 [322.608965ms] +Oct 27 14:27:50.254: INFO: Created: latency-svc-jtk45 +Oct 27 14:27:50.257: INFO: Got 
endpoints: latency-svc-jtk45 [321.970828ms] +Oct 27 14:27:50.260: INFO: Created: latency-svc-xk6lt +Oct 27 14:27:50.337: INFO: Got endpoints: latency-svc-xk6lt [394.617781ms] +Oct 27 14:27:50.342: INFO: Created: latency-svc-cq4qs +Oct 27 14:27:50.347: INFO: Created: latency-svc-g2mbh +Oct 27 14:27:50.348: INFO: Got endpoints: latency-svc-cq4qs [394.195802ms] +Oct 27 14:27:50.350: INFO: Got endpoints: latency-svc-g2mbh [392.925303ms] +Oct 27 14:27:50.354: INFO: Created: latency-svc-nfkkm +Oct 27 14:27:50.358: INFO: Got endpoints: latency-svc-nfkkm [324.585035ms] +Oct 27 14:27:50.430: INFO: Created: latency-svc-jfx7k +Oct 27 14:27:50.437: INFO: Created: latency-svc-m7dgv +Oct 27 14:27:50.442: INFO: Created: latency-svc-fpvz4 +Oct 27 14:27:50.447: INFO: Created: latency-svc-47n87 +Oct 27 14:27:50.452: INFO: Created: latency-svc-nhsk7 +Oct 27 14:27:50.459: INFO: Created: latency-svc-tfshw +Oct 27 14:27:50.467: INFO: Created: latency-svc-gzfnp +Oct 27 14:27:50.539: INFO: Created: latency-svc-rpkxf +Oct 27 14:27:50.552: INFO: Created: latency-svc-kgskd +Oct 27 14:27:50.639: INFO: Created: latency-svc-8l686 +Oct 27 14:27:50.645: INFO: Created: latency-svc-h6rp5 +Oct 27 14:27:50.735: INFO: Created: latency-svc-ksc72 +Oct 27 14:27:50.831: INFO: Created: latency-svc-2mb5h +Oct 27 14:27:50.929: INFO: Got endpoints: latency-svc-ksc72 [592.1531ms] +Oct 27 14:27:50.930: INFO: Got endpoints: latency-svc-fpvz4 [799.343549ms] +Oct 27 14:27:50.930: INFO: Got endpoints: latency-svc-m7dgv [878.199047ms] +Oct 27 14:27:50.932: INFO: Got endpoints: latency-svc-47n87 [787.980962ms] +Oct 27 14:27:50.932: INFO: Created: latency-svc-7kzss +Oct 27 14:27:50.936: INFO: Got endpoints: latency-svc-tfshw [784.116818ms] +Oct 27 14:27:50.936: INFO: Got endpoints: latency-svc-nhsk7 [790.180648ms] +Oct 27 14:27:50.936: INFO: Got endpoints: latency-svc-gzfnp [705.314852ms] +Oct 27 14:27:51.034: INFO: Created: latency-svc-w6lhr +Oct 27 14:27:51.035: INFO: Got endpoints: latency-svc-rpkxf [794.729129ms] +Oct 27 14:27:51.035: INFO: Got endpoints: latency-svc-kgskd [790.357284ms] +Oct 27 14:27:51.035: INFO: Got endpoints: latency-svc-jfx7k [991.856002ms] +Oct 27 14:27:51.035: INFO: Got endpoints: latency-svc-7kzss [684.788247ms] +Oct 27 14:27:51.035: INFO: Got endpoints: latency-svc-h6rp5 [777.346227ms] +Oct 27 14:27:51.035: INFO: Got endpoints: latency-svc-8l686 [783.737605ms] +Oct 27 14:27:51.035: INFO: Got endpoints: latency-svc-2mb5h [687.418302ms] +Oct 27 14:27:51.131: INFO: Created: latency-svc-8gc9w +Oct 27 14:27:51.131: INFO: Got endpoints: latency-svc-w6lhr [772.907888ms] +Oct 27 14:27:51.137: INFO: Created: latency-svc-vmzqw +Oct 27 14:27:51.142: INFO: Got endpoints: latency-svc-8gc9w [212.702806ms] +Oct 27 14:27:51.142: INFO: Created: latency-svc-cqdmg +Oct 27 14:27:51.148: INFO: Created: latency-svc-k48jb +Oct 27 14:27:51.232: INFO: Created: latency-svc-q6xn9 +Oct 27 14:27:51.237: INFO: Created: latency-svc-s2dc8 +Oct 27 14:27:51.238: INFO: Got endpoints: latency-svc-q6xn9 [305.977947ms] +Oct 27 14:27:51.243: INFO: Got endpoints: latency-svc-cqdmg [312.486288ms] +Oct 27 14:27:51.243: INFO: Got endpoints: latency-svc-vmzqw [312.541696ms] +Oct 27 14:27:51.243: INFO: Got endpoints: latency-svc-k48jb [307.003898ms] +Oct 27 14:27:51.247: INFO: Created: latency-svc-lcq24 +Oct 27 14:27:51.253: INFO: Created: latency-svc-vb9wg +Oct 27 14:27:51.258: INFO: Created: latency-svc-x6vqk +Oct 27 14:27:51.331: INFO: Created: latency-svc-24lrk +Oct 27 14:27:51.335: INFO: Got endpoints: latency-svc-x6vqk [300.314111ms] +Oct 27 
14:27:51.335: INFO: Got endpoints: latency-svc-s2dc8 [399.46048ms] +Oct 27 14:27:51.335: INFO: Got endpoints: latency-svc-lcq24 [399.590439ms] +Oct 27 14:27:51.336: INFO: Got endpoints: latency-svc-vb9wg [301.10331ms] +Oct 27 14:27:51.338: INFO: Created: latency-svc-d8q29 +Oct 27 14:27:51.341: INFO: Created: latency-svc-t9kqz +Oct 27 14:27:51.346: INFO: Created: latency-svc-kbmcb +Oct 27 14:27:51.352: INFO: Created: latency-svc-qbd64 +Oct 27 14:27:51.357: INFO: Created: latency-svc-jqwgg +Oct 27 14:27:51.431: INFO: Created: latency-svc-wk55p +Oct 27 14:27:51.432: INFO: Got endpoints: latency-svc-d8q29 [397.630429ms] +Oct 27 14:27:51.433: INFO: Got endpoints: latency-svc-24lrk [398.23459ms] +Oct 27 14:27:51.438: INFO: Created: latency-svc-gt4sh +Oct 27 14:27:51.444: INFO: Created: latency-svc-vt9pn +Oct 27 14:27:51.449: INFO: Got endpoints: latency-svc-t9kqz [414.526995ms] +Oct 27 14:27:51.450: INFO: Created: latency-svc-hb6hj +Oct 27 14:27:51.455: INFO: Created: latency-svc-kfjml +Oct 27 14:27:51.531: INFO: Created: latency-svc-wjt6m +Oct 27 14:27:51.536: INFO: Got endpoints: latency-svc-kbmcb [500.28252ms] +Oct 27 14:27:51.538: INFO: Created: latency-svc-l6xsf +Oct 27 14:27:51.544: INFO: Created: latency-svc-97gcn +Oct 27 14:27:51.550: INFO: Got endpoints: latency-svc-qbd64 [515.256536ms] +Oct 27 14:27:51.551: INFO: Created: latency-svc-j7jxw +Oct 27 14:27:51.634: INFO: Got endpoints: latency-svc-jqwgg [502.877183ms] +Oct 27 14:27:51.634: INFO: Created: latency-svc-qv94t +Oct 27 14:27:51.642: INFO: Created: latency-svc-pwkwv +Oct 27 14:27:51.647: INFO: Created: latency-svc-sntvj +Oct 27 14:27:51.649: INFO: Got endpoints: latency-svc-wk55p [506.816554ms] +Oct 27 14:27:51.657: INFO: Created: latency-svc-snnbn +Oct 27 14:27:51.730: INFO: Created: latency-svc-j27d8 +Oct 27 14:27:51.733: INFO: Got endpoints: latency-svc-gt4sh [495.375344ms] +Oct 27 14:27:51.736: INFO: Created: latency-svc-q5bst +Oct 27 14:27:51.741: INFO: Created: latency-svc-nzmt8 +Oct 27 14:27:51.752: INFO: Created: latency-svc-2zf7g +Oct 27 14:27:51.752: INFO: Got endpoints: latency-svc-vt9pn [508.790537ms] +Oct 27 14:27:51.831: INFO: Got endpoints: latency-svc-hb6hj [588.403741ms] +Oct 27 14:27:51.840: INFO: Created: latency-svc-79srm +Oct 27 14:27:51.850: INFO: Got endpoints: latency-svc-kfjml [606.753416ms] +Oct 27 14:27:51.850: INFO: Created: latency-svc-v274r +Oct 27 14:27:51.867: INFO: Created: latency-svc-wcw7q +Oct 27 14:27:51.934: INFO: Got endpoints: latency-svc-wjt6m [598.257451ms] +Oct 27 14:27:51.952: INFO: Created: latency-svc-d6746 +Oct 27 14:27:51.955: INFO: Got endpoints: latency-svc-l6xsf [619.937389ms] +Oct 27 14:27:52.032: INFO: Created: latency-svc-tb6jp +Oct 27 14:27:52.032: INFO: Got endpoints: latency-svc-97gcn [696.569865ms] +Oct 27 14:27:52.050: INFO: Created: latency-svc-vvmhj +Oct 27 14:27:52.051: INFO: Got endpoints: latency-svc-j7jxw [715.290599ms] +Oct 27 14:27:52.068: INFO: Created: latency-svc-xl25k +Oct 27 14:27:52.132: INFO: Got endpoints: latency-svc-qv94t [699.221807ms] +Oct 27 14:27:52.159: INFO: Got endpoints: latency-svc-pwkwv [725.697101ms] +Oct 27 14:27:52.160: INFO: Created: latency-svc-94878 +Oct 27 14:27:52.176: INFO: Created: latency-svc-vxgrm +Oct 27 14:27:52.252: INFO: Got endpoints: latency-svc-snnbn [716.439374ms] +Oct 27 14:27:52.252: INFO: Got endpoints: latency-svc-sntvj [803.006779ms] +Oct 27 14:27:52.269: INFO: Created: latency-svc-6xcb6 +Oct 27 14:27:52.275: INFO: Created: latency-svc-lcqmn +Oct 27 14:27:52.330: INFO: Got endpoints: latency-svc-j27d8 [780.224127ms] 
+Oct 27 14:27:52.348: INFO: Created: latency-svc-vl8sl +Oct 27 14:27:52.349: INFO: Got endpoints: latency-svc-q5bst [715.329689ms] +Oct 27 14:27:52.367: INFO: Created: latency-svc-cplkl +Oct 27 14:27:52.433: INFO: Got endpoints: latency-svc-nzmt8 [784.055988ms] +Oct 27 14:27:52.450: INFO: Created: latency-svc-h7ml4 +Oct 27 14:27:52.450: INFO: Got endpoints: latency-svc-2zf7g [716.95554ms] +Oct 27 14:27:52.467: INFO: Created: latency-svc-fx74m +Oct 27 14:27:52.532: INFO: Got endpoints: latency-svc-79srm [780.773952ms] +Oct 27 14:27:52.551: INFO: Created: latency-svc-gfmws +Oct 27 14:27:52.551: INFO: Got endpoints: latency-svc-v274r [719.853889ms] +Oct 27 14:27:52.569: INFO: Created: latency-svc-8thxn +Oct 27 14:27:52.631: INFO: Got endpoints: latency-svc-wcw7q [780.890001ms] +Oct 27 14:27:52.648: INFO: Created: latency-svc-gztgc +Oct 27 14:27:52.649: INFO: Got endpoints: latency-svc-d6746 [715.268821ms] +Oct 27 14:27:52.667: INFO: Created: latency-svc-9hcff +Oct 27 14:27:52.702: INFO: Got endpoints: latency-svc-tb6jp [746.282931ms] +Oct 27 14:27:52.721: INFO: Created: latency-svc-l4bwn +Oct 27 14:27:52.752: INFO: Got endpoints: latency-svc-vvmhj [719.346999ms] +Oct 27 14:27:52.770: INFO: Created: latency-svc-9kf7h +Oct 27 14:27:52.799: INFO: Got endpoints: latency-svc-xl25k [748.191652ms] +Oct 27 14:27:52.819: INFO: Created: latency-svc-cq6hs +Oct 27 14:27:52.900: INFO: Got endpoints: latency-svc-94878 [768.571316ms] +Oct 27 14:27:52.918: INFO: Created: latency-svc-zlxlt +Oct 27 14:27:52.953: INFO: Got endpoints: latency-svc-vxgrm [794.381826ms] +Oct 27 14:27:52.971: INFO: Created: latency-svc-skhfs +Oct 27 14:27:53.001: INFO: Got endpoints: latency-svc-6xcb6 [748.381575ms] +Oct 27 14:27:53.020: INFO: Created: latency-svc-p4msn +Oct 27 14:27:53.050: INFO: Got endpoints: latency-svc-lcqmn [797.465926ms] +Oct 27 14:27:53.067: INFO: Created: latency-svc-jd8sv +Oct 27 14:27:53.099: INFO: Got endpoints: latency-svc-vl8sl [768.905933ms] +Oct 27 14:27:53.120: INFO: Created: latency-svc-24b7g +Oct 27 14:27:53.151: INFO: Got endpoints: latency-svc-cplkl [801.532869ms] +Oct 27 14:27:53.168: INFO: Created: latency-svc-pzsds +Oct 27 14:27:53.201: INFO: Got endpoints: latency-svc-h7ml4 [768.04682ms] +Oct 27 14:27:53.218: INFO: Created: latency-svc-c8n4s +Oct 27 14:27:53.250: INFO: Got endpoints: latency-svc-fx74m [799.827527ms] +Oct 27 14:27:53.268: INFO: Created: latency-svc-5h8rp +Oct 27 14:27:53.302: INFO: Got endpoints: latency-svc-gfmws [769.063866ms] +Oct 27 14:27:53.320: INFO: Created: latency-svc-s9lnb +Oct 27 14:27:53.349: INFO: Got endpoints: latency-svc-8thxn [797.69838ms] +Oct 27 14:27:53.367: INFO: Created: latency-svc-dklcl +Oct 27 14:27:53.399: INFO: Got endpoints: latency-svc-gztgc [768.443397ms] +Oct 27 14:27:53.418: INFO: Created: latency-svc-6tmxm +Oct 27 14:27:53.449: INFO: Got endpoints: latency-svc-9hcff [800.085989ms] +Oct 27 14:27:53.468: INFO: Created: latency-svc-kpwht +Oct 27 14:27:53.503: INFO: Got endpoints: latency-svc-l4bwn [800.902014ms] +Oct 27 14:27:53.521: INFO: Created: latency-svc-c9hzk +Oct 27 14:27:53.551: INFO: Got endpoints: latency-svc-9kf7h [798.960084ms] +Oct 27 14:27:53.570: INFO: Created: latency-svc-rmwlz +Oct 27 14:27:53.601: INFO: Got endpoints: latency-svc-cq6hs [801.999884ms] +Oct 27 14:27:53.620: INFO: Created: latency-svc-lcjwv +Oct 27 14:27:53.651: INFO: Got endpoints: latency-svc-zlxlt [750.274577ms] +Oct 27 14:27:53.676: INFO: Created: latency-svc-5hm5m +Oct 27 14:27:53.700: INFO: Got endpoints: latency-svc-skhfs [747.205059ms] +Oct 27 
14:27:53.720: INFO: Created: latency-svc-gqw2g +Oct 27 14:27:53.749: INFO: Got endpoints: latency-svc-p4msn [748.130833ms] +Oct 27 14:27:53.767: INFO: Created: latency-svc-zlpkx +Oct 27 14:27:53.803: INFO: Got endpoints: latency-svc-jd8sv [752.973158ms] +Oct 27 14:27:53.820: INFO: Created: latency-svc-jpqsf +Oct 27 14:27:53.849: INFO: Got endpoints: latency-svc-24b7g [749.405197ms] +Oct 27 14:27:53.867: INFO: Created: latency-svc-blpq9 +Oct 27 14:27:53.899: INFO: Got endpoints: latency-svc-pzsds [748.313945ms] +Oct 27 14:27:53.917: INFO: Created: latency-svc-7rqth +Oct 27 14:27:53.951: INFO: Got endpoints: latency-svc-c8n4s [749.964477ms] +Oct 27 14:27:53.970: INFO: Created: latency-svc-hnt5w +Oct 27 14:27:54.001: INFO: Got endpoints: latency-svc-5h8rp [750.831494ms] +Oct 27 14:27:54.018: INFO: Created: latency-svc-fntz6 +Oct 27 14:27:54.050: INFO: Got endpoints: latency-svc-s9lnb [748.787993ms] +Oct 27 14:27:54.068: INFO: Created: latency-svc-wml4g +Oct 27 14:27:54.102: INFO: Got endpoints: latency-svc-dklcl [753.138646ms] +Oct 27 14:27:54.120: INFO: Created: latency-svc-f74qg +Oct 27 14:27:54.150: INFO: Got endpoints: latency-svc-6tmxm [750.719195ms] +Oct 27 14:27:54.167: INFO: Created: latency-svc-sl4v4 +Oct 27 14:27:54.199: INFO: Got endpoints: latency-svc-kpwht [749.977894ms] +Oct 27 14:27:54.220: INFO: Created: latency-svc-c4d5s +Oct 27 14:27:54.250: INFO: Got endpoints: latency-svc-c9hzk [747.623682ms] +Oct 27 14:27:54.268: INFO: Created: latency-svc-2jmxb +Oct 27 14:27:54.300: INFO: Got endpoints: latency-svc-rmwlz [749.299818ms] +Oct 27 14:27:54.318: INFO: Created: latency-svc-r676m +Oct 27 14:27:54.351: INFO: Got endpoints: latency-svc-lcjwv [749.801553ms] +Oct 27 14:27:54.369: INFO: Created: latency-svc-72v24 +Oct 27 14:27:54.401: INFO: Got endpoints: latency-svc-5hm5m [749.844772ms] +Oct 27 14:27:54.418: INFO: Created: latency-svc-64gsd +Oct 27 14:27:54.450: INFO: Got endpoints: latency-svc-gqw2g [749.838761ms] +Oct 27 14:27:54.468: INFO: Created: latency-svc-h6p4v +Oct 27 14:27:54.500: INFO: Got endpoints: latency-svc-zlpkx [750.713229ms] +Oct 27 14:27:54.518: INFO: Created: latency-svc-2w88x +Oct 27 14:27:54.551: INFO: Got endpoints: latency-svc-jpqsf [747.843882ms] +Oct 27 14:27:54.569: INFO: Created: latency-svc-mvmlm +Oct 27 14:27:54.600: INFO: Got endpoints: latency-svc-blpq9 [751.295641ms] +Oct 27 14:27:54.618: INFO: Created: latency-svc-s6gx2 +Oct 27 14:27:54.652: INFO: Got endpoints: latency-svc-7rqth [752.944955ms] +Oct 27 14:27:54.676: INFO: Created: latency-svc-wd9fc +Oct 27 14:27:54.702: INFO: Got endpoints: latency-svc-hnt5w [750.986879ms] +Oct 27 14:27:54.719: INFO: Created: latency-svc-vb7wn +Oct 27 14:27:54.749: INFO: Got endpoints: latency-svc-fntz6 [747.812077ms] +Oct 27 14:27:54.767: INFO: Created: latency-svc-sdtwg +Oct 27 14:27:54.800: INFO: Got endpoints: latency-svc-wml4g [749.769412ms] +Oct 27 14:27:54.819: INFO: Created: latency-svc-7xnfp +Oct 27 14:27:54.849: INFO: Got endpoints: latency-svc-f74qg [747.019461ms] +Oct 27 14:27:54.867: INFO: Created: latency-svc-wf9bg +Oct 27 14:27:54.901: INFO: Got endpoints: latency-svc-sl4v4 [750.834022ms] +Oct 27 14:27:54.919: INFO: Created: latency-svc-55pc2 +Oct 27 14:27:54.950: INFO: Got endpoints: latency-svc-c4d5s [750.801229ms] +Oct 27 14:27:54.969: INFO: Created: latency-svc-xv6jl +Oct 27 14:27:55.000: INFO: Got endpoints: latency-svc-2jmxb [749.863291ms] +Oct 27 14:27:55.021: INFO: Created: latency-svc-5cfld +Oct 27 14:27:55.053: INFO: Got endpoints: latency-svc-r676m [752.6465ms] +Oct 27 14:27:55.071: INFO: 
Created: latency-svc-ghnwc +Oct 27 14:27:55.100: INFO: Got endpoints: latency-svc-72v24 [748.804995ms] +Oct 27 14:27:55.117: INFO: Created: latency-svc-pn7xv +Oct 27 14:27:55.149: INFO: Got endpoints: latency-svc-64gsd [748.239951ms] +Oct 27 14:27:55.169: INFO: Created: latency-svc-xdlp5 +Oct 27 14:27:55.201: INFO: Got endpoints: latency-svc-h6p4v [750.243404ms] +Oct 27 14:27:55.220: INFO: Created: latency-svc-xvfg2 +Oct 27 14:27:55.250: INFO: Got endpoints: latency-svc-2w88x [750.869134ms] +Oct 27 14:27:55.270: INFO: Created: latency-svc-gmflb +Oct 27 14:27:55.299: INFO: Got endpoints: latency-svc-mvmlm [748.697844ms] +Oct 27 14:27:55.318: INFO: Created: latency-svc-rf67f +Oct 27 14:27:55.350: INFO: Got endpoints: latency-svc-s6gx2 [750.094071ms] +Oct 27 14:27:55.368: INFO: Created: latency-svc-rws8s +Oct 27 14:27:55.400: INFO: Got endpoints: latency-svc-wd9fc [747.298913ms] +Oct 27 14:27:55.418: INFO: Created: latency-svc-t2wtj +Oct 27 14:27:55.450: INFO: Got endpoints: latency-svc-vb7wn [747.824811ms] +Oct 27 14:27:55.469: INFO: Created: latency-svc-g2tq4 +Oct 27 14:27:55.501: INFO: Got endpoints: latency-svc-sdtwg [752.140528ms] +Oct 27 14:27:55.518: INFO: Created: latency-svc-ff6rk +Oct 27 14:27:55.552: INFO: Got endpoints: latency-svc-7xnfp [751.802426ms] +Oct 27 14:27:55.574: INFO: Created: latency-svc-pgsgs +Oct 27 14:27:55.607: INFO: Got endpoints: latency-svc-wf9bg [757.554624ms] +Oct 27 14:27:55.625: INFO: Created: latency-svc-zh2gd +Oct 27 14:27:55.650: INFO: Got endpoints: latency-svc-55pc2 [749.451929ms] +Oct 27 14:27:55.669: INFO: Created: latency-svc-22792 +Oct 27 14:27:55.699: INFO: Got endpoints: latency-svc-xv6jl [749.282521ms] +Oct 27 14:27:55.717: INFO: Created: latency-svc-lzrjz +Oct 27 14:27:55.752: INFO: Got endpoints: latency-svc-5cfld [751.297488ms] +Oct 27 14:27:55.770: INFO: Created: latency-svc-rm4lk +Oct 27 14:27:55.800: INFO: Got endpoints: latency-svc-ghnwc [746.898826ms] +Oct 27 14:27:55.817: INFO: Created: latency-svc-rqx8g +Oct 27 14:27:55.850: INFO: Got endpoints: latency-svc-pn7xv [750.20363ms] +Oct 27 14:27:55.868: INFO: Created: latency-svc-2t7p4 +Oct 27 14:27:55.902: INFO: Got endpoints: latency-svc-xdlp5 [753.393759ms] +Oct 27 14:27:55.921: INFO: Created: latency-svc-98ffx +Oct 27 14:27:55.949: INFO: Got endpoints: latency-svc-xvfg2 [748.541955ms] +Oct 27 14:27:55.967: INFO: Created: latency-svc-bjlzk +Oct 27 14:27:55.999: INFO: Got endpoints: latency-svc-gmflb [748.828409ms] +Oct 27 14:27:56.020: INFO: Created: latency-svc-7228f +Oct 27 14:27:56.049: INFO: Got endpoints: latency-svc-rf67f [749.664982ms] +Oct 27 14:27:56.068: INFO: Created: latency-svc-vh6kf +Oct 27 14:27:56.099: INFO: Got endpoints: latency-svc-rws8s [748.15987ms] +Oct 27 14:27:56.116: INFO: Created: latency-svc-lf5r5 +Oct 27 14:27:56.152: INFO: Got endpoints: latency-svc-t2wtj [752.529814ms] +Oct 27 14:27:56.170: INFO: Created: latency-svc-7jc6v +Oct 27 14:27:56.201: INFO: Got endpoints: latency-svc-g2tq4 [750.693905ms] +Oct 27 14:27:56.221: INFO: Created: latency-svc-92nbc +Oct 27 14:27:56.249: INFO: Got endpoints: latency-svc-ff6rk [748.444563ms] +Oct 27 14:27:56.268: INFO: Created: latency-svc-h6cq2 +Oct 27 14:27:56.300: INFO: Got endpoints: latency-svc-pgsgs [747.771869ms] +Oct 27 14:27:56.326: INFO: Created: latency-svc-j4rpl +Oct 27 14:27:56.350: INFO: Got endpoints: latency-svc-zh2gd [743.028021ms] +Oct 27 14:27:56.374: INFO: Created: latency-svc-plxjm +Oct 27 14:27:56.399: INFO: Got endpoints: latency-svc-22792 [748.997868ms] +Oct 27 14:27:56.417: INFO: Created: 
latency-svc-blvcj +Oct 27 14:27:56.449: INFO: Got endpoints: latency-svc-lzrjz [749.319155ms] +Oct 27 14:27:56.466: INFO: Created: latency-svc-8dnsq +Oct 27 14:27:56.501: INFO: Got endpoints: latency-svc-rm4lk [749.213947ms] +Oct 27 14:27:56.519: INFO: Created: latency-svc-x7qgq +Oct 27 14:27:56.549: INFO: Got endpoints: latency-svc-rqx8g [748.977649ms] +Oct 27 14:27:56.568: INFO: Created: latency-svc-gdp4m +Oct 27 14:27:56.600: INFO: Got endpoints: latency-svc-2t7p4 [749.929082ms] +Oct 27 14:27:56.620: INFO: Created: latency-svc-qjl24 +Oct 27 14:27:56.652: INFO: Got endpoints: latency-svc-98ffx [749.46326ms] +Oct 27 14:27:56.732: INFO: Created: latency-svc-j6zvg +Oct 27 14:27:56.735: INFO: Got endpoints: latency-svc-bjlzk [785.621013ms] +Oct 27 14:27:56.835: INFO: Got endpoints: latency-svc-7228f [835.447763ms] +Oct 27 14:27:56.836: INFO: Got endpoints: latency-svc-vh6kf [786.985641ms] +Oct 27 14:27:56.933: INFO: Got endpoints: latency-svc-lf5r5 [834.210673ms] +Oct 27 14:27:56.935: INFO: Got endpoints: latency-svc-7jc6v [783.075437ms] +Oct 27 14:27:57.033: INFO: Created: latency-svc-98plc +Oct 27 14:27:57.034: INFO: Got endpoints: latency-svc-92nbc [832.932369ms] +Oct 27 14:27:57.035: INFO: Got endpoints: latency-svc-h6cq2 [786.002747ms] +Oct 27 14:27:57.038: INFO: Created: latency-svc-cqnzl +Oct 27 14:27:57.043: INFO: Created: latency-svc-tpxd9 +Oct 27 14:27:57.048: INFO: Created: latency-svc-pvqqk +Oct 27 14:27:57.049: INFO: Got endpoints: latency-svc-j4rpl [748.654405ms] +Oct 27 14:27:57.054: INFO: Created: latency-svc-thqg8 +Oct 27 14:27:57.060: INFO: Created: latency-svc-l4dj6 +Oct 27 14:27:57.131: INFO: Created: latency-svc-j4n26 +Oct 27 14:27:57.132: INFO: Got endpoints: latency-svc-plxjm [781.92491ms] +Oct 27 14:27:57.137: INFO: Created: latency-svc-d4w56 +Oct 27 14:27:57.150: INFO: Got endpoints: latency-svc-blvcj [750.36536ms] +Oct 27 14:27:57.150: INFO: Created: latency-svc-jqz6k +Oct 27 14:27:57.167: INFO: Created: latency-svc-5dx82 +Oct 27 14:27:57.200: INFO: Got endpoints: latency-svc-8dnsq [750.776208ms] +Oct 27 14:27:57.218: INFO: Created: latency-svc-9jnkd +Oct 27 14:27:57.250: INFO: Got endpoints: latency-svc-x7qgq [749.041834ms] +Oct 27 14:27:57.300: INFO: Got endpoints: latency-svc-gdp4m [750.782129ms] +Oct 27 14:27:57.350: INFO: Got endpoints: latency-svc-qjl24 [749.460918ms] +Oct 27 14:27:57.399: INFO: Got endpoints: latency-svc-j6zvg [747.14816ms] +Oct 27 14:27:57.451: INFO: Got endpoints: latency-svc-98plc [715.558095ms] +Oct 27 14:27:57.499: INFO: Got endpoints: latency-svc-cqnzl [664.179509ms] +Oct 27 14:27:57.549: INFO: Got endpoints: latency-svc-tpxd9 [713.163253ms] +Oct 27 14:27:57.600: INFO: Got endpoints: latency-svc-pvqqk [664.421313ms] +Oct 27 14:27:57.650: INFO: Got endpoints: latency-svc-thqg8 [716.834205ms] +Oct 27 14:27:57.700: INFO: Got endpoints: latency-svc-l4dj6 [666.319086ms] +Oct 27 14:27:57.749: INFO: Got endpoints: latency-svc-j4n26 [713.811968ms] +Oct 27 14:27:57.801: INFO: Got endpoints: latency-svc-d4w56 [752.033752ms] +Oct 27 14:27:57.849: INFO: Got endpoints: latency-svc-jqz6k [716.706632ms] +Oct 27 14:27:57.900: INFO: Got endpoints: latency-svc-5dx82 [750.52764ms] +Oct 27 14:27:57.949: INFO: Got endpoints: latency-svc-9jnkd [749.480489ms] +Oct 27 14:27:57.949: INFO: Latencies: [22.423692ms 23.836112ms 28.677153ms 39.403203ms 76.929483ms 83.978004ms 86.989086ms 95.830097ms 99.549397ms 178.566917ms 187.575087ms 193.626817ms 212.702806ms 277.446988ms 290.250664ms 290.729566ms 298.114577ms 300.314111ms 301.10331ms 305.977947ms 307.003898ms 
312.486288ms 312.541696ms 321.970828ms 322.608965ms 322.952184ms 324.585035ms 328.044908ms 355.114876ms 361.818571ms 362.50594ms 390.368902ms 391.65368ms 392.925303ms 393.450254ms 394.195802ms 394.617781ms 395.071968ms 395.437418ms 397.06511ms 397.630429ms 397.77062ms 398.23459ms 398.653911ms 399.46048ms 399.590439ms 400.350961ms 400.630361ms 402.59307ms 402.697895ms 403.213851ms 403.225801ms 405.418259ms 408.639954ms 414.526995ms 469.17817ms 495.375344ms 500.28252ms 502.877183ms 506.816554ms 508.790537ms 515.256536ms 588.403741ms 592.1531ms 598.257451ms 606.753416ms 619.937389ms 664.179509ms 664.421313ms 666.319086ms 684.788247ms 687.418302ms 696.569865ms 699.221807ms 705.314852ms 713.163253ms 713.811968ms 715.268821ms 715.290599ms 715.329689ms 715.558095ms 716.439374ms 716.706632ms 716.834205ms 716.95554ms 719.346999ms 719.853889ms 725.697101ms 743.028021ms 746.282931ms 746.898826ms 747.019461ms 747.14816ms 747.205059ms 747.298913ms 747.623682ms 747.771869ms 747.812077ms 747.824811ms 747.843882ms 748.130833ms 748.15987ms 748.191652ms 748.239951ms 748.313945ms 748.381575ms 748.444563ms 748.541955ms 748.654405ms 748.697844ms 748.787993ms 748.804995ms 748.828409ms 748.977649ms 748.997868ms 749.041834ms 749.213947ms 749.282521ms 749.299818ms 749.319155ms 749.405197ms 749.451929ms 749.460918ms 749.46326ms 749.480489ms 749.664982ms 749.769412ms 749.801553ms 749.838761ms 749.844772ms 749.863291ms 749.929082ms 749.964477ms 749.977894ms 750.094071ms 750.20363ms 750.243404ms 750.274577ms 750.36536ms 750.52764ms 750.693905ms 750.713229ms 750.719195ms 750.776208ms 750.782129ms 750.801229ms 750.831494ms 750.834022ms 750.869134ms 750.986879ms 751.295641ms 751.297488ms 751.802426ms 752.033752ms 752.140528ms 752.529814ms 752.6465ms 752.944955ms 752.973158ms 753.138646ms 753.393759ms 757.554624ms 768.04682ms 768.443397ms 768.571316ms 768.905933ms 769.063866ms 772.907888ms 777.346227ms 780.224127ms 780.773952ms 780.890001ms 781.92491ms 783.075437ms 783.737605ms 784.055988ms 784.116818ms 785.621013ms 786.002747ms 786.985641ms 787.980962ms 790.180648ms 790.357284ms 794.381826ms 794.729129ms 797.465926ms 797.69838ms 798.960084ms 799.343549ms 799.827527ms 800.085989ms 800.902014ms 801.532869ms 801.999884ms 803.006779ms 832.932369ms 834.210673ms 835.447763ms 878.199047ms 991.856002ms] +Oct 27 14:27:57.950: INFO: 50 %ile: 748.130833ms +Oct 27 14:27:57.950: INFO: 90 %ile: 787.980962ms +Oct 27 14:27:57.950: INFO: 99 %ile: 878.199047ms +Oct 27 14:27:57.950: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:27:57.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-5361" for this suite. 
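The percentile readout just above is taken straight off the sorted `Latencies:` list (200 samples): the reported 50/90/99 %ile values are the 101st, 181st, and 199th entries, i.e. the entry at offset `p*N/100` in 0-based terms. A minimal sketch of the same computation, assuming the samples were dumped one per line into a hypothetical `latencies.txt`:

```bash
# Recompute the e2e latency percentiles from the raw samples.
# Indexing mirrors the log output: sorted[p*N/100] (0-based),
# so p=50, N=200 picks the 101st entry.
N=$(wc -l < latencies.txt)
sorted=$(sort -n latencies.txt)
for p in 50 90 99; do
  idx=$(( p * N / 100 + 1 ))   # convert to a 1-based line number
  echo "$p %ile: $(echo "$sorted" | sed -n "${idx}p")"
done
```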
+•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":89,"skipped":1714,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:27:57.985: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-8166 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:00.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-8166" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":90,"skipped":1763,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:00.308: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1003 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Oct 27 14:28:02.572: INFO: &Pod{ObjectMeta:{send-events-2aed4560-f266-417d-807a-9eda130c924b events-1003 f2fb486b-bcd7-42e6-bf07-1b9074081367 14791 0 2021-10-27 14:28:00 +0000 UTC map[name:foo time:499685295] map[cni.projectcalico.org/containerID:422693ea9476c476c8a4fd0ca14835dfe28f4839860efa5fce524d617d89198d cni.projectcalico.org/podIP:100.96.1.90/32 cni.projectcalico.org/podIPs:100.96.1.90/32 kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-27 14:28:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:28:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:28:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.90\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w6rrb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6rrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds
:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:28:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:28:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:28:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:28:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.90,StartTime:2021-10-27 14:28:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:28:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://64517ad50ea663d77cea68c335735336b6fdecfd49d15f33b52955992fdeb739,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +STEP: checking for scheduler event about the pod +Oct 27 14:28:04.585: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Oct 27 14:28:06.631: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:06.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-1003" for this suite. 
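The Events test above only passes once both a scheduler event and a kubelet event show up for the pod. Outside the framework, the same evidence can be pulled with a field selector on the event's involved object; a sketch against the pod and namespace from this run (assuming the pod still exists):

```bash
# "Scheduled" comes from the scheduler; "Pulled"/"Created"/"Started"
# come from the kubelet on the node that runs the pod.
kubectl get events -n events-1003 \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-2aed4560-f266-417d-807a-9eda130c924b
```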
+•{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":346,"completed":91,"skipped":1771,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:06.682: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9748 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 14:28:10.099: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:10.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9748" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":92,"skipped":1784,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:10.164: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1079 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-cb8dead0-3783-45e5-8775-7db9deb09b79 +STEP: Creating secret with name s-test-opt-upd-31417c96-fe2f-41e0-8036-533e7b2884ac +STEP: Creating the pod +Oct 27 14:28:10.425: INFO: The status of Pod pod-secrets-bfcd0f89-9abb-42c1-9cee-02258c5793c3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:28:12.438: INFO: The status of Pod pod-secrets-bfcd0f89-9abb-42c1-9cee-02258c5793c3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:28:14.439: INFO: The status of Pod pod-secrets-bfcd0f89-9abb-42c1-9cee-02258c5793c3 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-cb8dead0-3783-45e5-8775-7db9deb09b79 +STEP: Updating secret s-test-opt-upd-31417c96-fe2f-41e0-8036-533e7b2884ac +STEP: Creating secret with name s-test-opt-create-d540b1a8-513f-425f-917d-75b72cc62d77 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:16.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1079" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":93,"skipped":1787,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:16.806: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7113 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Oct 27 14:28:27.310: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:27.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1027 14:28:27.310147 5683 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-7113" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":94,"skipped":1811,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:27.345: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-2040 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 14:28:27.575: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 14:28:27.600: INFO: Waiting for terminating namespaces to be deleted... +Oct 27 14:28:27.611: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 before test +Oct 27 14:28:27.639: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-vv84b from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: apiserver-proxy-sl296 from kube-system started at 2021-10-27 13:56:02 +0000 UTC (2 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: calico-node-4h2tf from kube-system started at 2021-10-27 13:58:05 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: calico-node-vertical-autoscaler-785b5f968-9qxv8 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-s7nwv from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: calico-typha-vertical-autoscaler-5c9655cddd-qxmpq from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: coredns-6944b5cf58-cqcmx from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: coredns-6944b5cf58-qwp9p from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container coredns ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: csi-driver-node-l4n7m from kube-system started at 2021-10-27 13:56:02 +0000 UTC (3 
container statuses recorded) +Oct 27 14:28:27.639: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: kube-proxy-85xr2 from kube-system started at 2021-10-27 13:59:36 +0000 UTC (2 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: metrics-server-6b8fdcd747-t4xbj from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: node-exporter-cwjxv from kube-system started at 2021-10-27 13:56:02 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: node-problem-detector-g5rmr from kube-system started at 2021-10-27 14:24:37 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: vpn-shoot-77b49d5987-8ddn6 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: dashboard-metrics-scraper-7ccbfc448f-l8nhq from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 14:28:27.639: INFO: kubernetes-dashboard-7888b55b49-xptfd from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.639: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 14:28:27.639: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc before test +Oct 27 14:28:27.657: INFO: send-events-2aed4560-f266-417d-807a-9eda130c924b from events-1003 started at 2021-10-27 14:28:00 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container p ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: addons-nginx-ingress-controller-d5756fc97-fcls5 from kube-system started at 2021-10-27 14:02:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: apiserver-proxy-z9z6b from kube-system started at 2021-10-27 13:56:05 +0000 UTC (2 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container proxy ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: Container sidecar ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: blackbox-exporter-65c549b94c-rjgf7 from kube-system started at 2021-10-27 14:03:35 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: calico-kube-controllers-56bcbfb5c5-f9t75 from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: calico-node-7gp7f from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 
14:28:27.657: INFO: Container calico-node ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: calico-typha-deploy-546b97d4b5-z8pql from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: csi-driver-node-4sm4p from kube-system started at 2021-10-27 13:56:05 +0000 UTC (3 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: kube-proxy-j2k28 from kube-system started at 2021-10-27 13:59:36 +0000 UTC (2 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: node-exporter-zsjq5 from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 14:28:27.657: INFO: node-problem-detector-9pkv8 from kube-system started at 2021-10-27 14:24:37 +0000 UTC (1 container statuses recorded) +Oct 27 14:28:27.657: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-7dca7e5d-dbe5-4b98-b302-93f47c6f6c71 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-7dca7e5d-dbe5-4b98-b302-93f47c6f6c71 off the node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +STEP: verifying the node doesn't have the label kubernetes.io/e2e-7dca7e5d-dbe5-4b98-b302-93f47c6f6c71 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:31.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-2040" for this suite. 
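The NodeSelector test above first schedules an unlabeled probe pod to discover a usable node, applies a random label to that node, and then relaunches the pod with a matching `nodeSelector`. A condensed sketch of the same flow (node name taken from this run's log; label key and value illustrative):

```bash
NODE=shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc
kubectl label node "$NODE" e2e-demo=42

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    e2e-demo: "42"   # only nodes carrying this label are eligible
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.5
EOF

# A trailing dash removes the label again.
kubectl label node "$NODE" e2e-demo-
```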
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":95,"skipped":1842,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:31.896: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8554 +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-2b9ef4b7-4b10-446d-aac2-227bece69546 +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:36.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8554" for this suite. 
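The ConfigMap test above checks that `binaryData` (arbitrary bytes, base64-encoded in the manifest) is projected into the volume alongside plain `data` keys. A minimal sketch with hypothetical names:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-mixed-demo
data:
  text: "hello"
binaryData:
  blob: AQIDBA==   # base64 for the raw bytes 0x01 0x02 0x03 0x04
EOF
# Mounted as a volume, this yields the files <mountPath>/text and <mountPath>/blob.
```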
+•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":96,"skipped":1869,"failed":0} +SSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:36.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-6783 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +STEP: reading a file in the container +Oct 27 14:28:41.053: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-6783 pod-service-account-2ef6b183-74bb-4c1b-8648-5592637604ed -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Oct 27 14:28:41.396: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-6783 pod-service-account-2ef6b183-74bb-4c1b-8648-5592637604ed -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Oct 27 14:28:41.760: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-6783 pod-service-account-2ef6b183-74bb-4c1b-8648-5592637604ed -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:42.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6783" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":97,"skipped":1872,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:42.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-1696 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:28:44.641: INFO: Deleting pod "var-expansion-e9684b74-84bd-4189-a949-cd97fe48c12e" in namespace "var-expansion-1696" +Oct 27 14:28:44.655: INFO: Wait up to 5m0s for pod "var-expansion-e9684b74-84bd-4189-a949-cd97fe48c12e" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:48.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-1696" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":98,"skipped":1914,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:48.714: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-407 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Oct 27 14:28:48.975: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-407 396d393d-22be-41a6-b15e-be2498c2bd8e 16038 0 2021-10-27 14:28:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:28:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:28:48.976: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-407 396d393d-22be-41a6-b15e-be2498c2bd8e 16039 0 2021-10-27 14:28:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:28:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:28:48.976: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-407 396d393d-22be-41a6-b15e-be2498c2bd8e 16040 0 2021-10-27 14:28:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:28:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Oct 27 14:28:59.061: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-407 396d393d-22be-41a6-b15e-be2498c2bd8e 16092 0 2021-10-27 14:28:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:28:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:28:59.061: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-407 396d393d-22be-41a6-b15e-be2498c2bd8e 16093 0 2021-10-27 14:28:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:28:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:28:59.061: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-407 396d393d-22be-41a6-b15e-be2498c2bd8e 16094 0 2021-10-27 14:28:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-27 14:28:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:28:59.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-407" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":99,"skipped":1922,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:28:59.094: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-480 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-480 +Oct 27 14:28:59.361: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:29:01.373: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 14:29:01.384: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 14:29:01.758: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 14:29:01.758: INFO: stdout: "iptables" +Oct 27 14:29:01.758: INFO: proxyMode: iptables +Oct 27 14:29:01.774: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 27 14:29:01.784: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-480 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-480 +I1027 14:29:01.817823 5683 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-480, replica count: 3 +I1027 14:29:04.869170 5683 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:29:04.911: INFO: Creating new exec pod +Oct 27 14:29:09.975: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Oct 27 14:29:10.379: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 14:29:10.379: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" 
+Oct 27 14:29:10.379: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.27.7 80' +Oct 27 14:29:10.693: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.27.7 80\nConnection to 100.71.27.7 80 port [tcp/http] succeeded!\n" +Oct 27 14:29:10.693: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:29:10.693: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.2 30742' +Oct 27 14:29:11.124: INFO: stderr: "+ nc -v -t -w 2 10.250.0.2 30742\nConnection to 10.250.0.2 30742 port [tcp/*] succeeded!\n+ echo hostName\n" +Oct 27 14:29:11.124: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:29:11.124: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.3 30742' +Oct 27 14:29:11.503: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.3 30742\nConnection to 10.250.0.3 30742 port [tcp/*] succeeded!\n" +Oct 27 14:29:11.503: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:29:11.503: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.2:30742/ ; done' +Oct 27 14:29:11.962: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n" +Oct 27 14:29:11.962: INFO: stdout: 
"\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp\naffinity-nodeport-timeout-52mxp" +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Received response from host: affinity-nodeport-timeout-52mxp +Oct 27 14:29:11.962: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.2:30742/' +Oct 27 14:29:12.336: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n" +Oct 27 14:29:12.337: INFO: stdout: "affinity-nodeport-timeout-52mxp" +Oct 27 14:29:32.337: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.2:30742/' +Oct 27 14:29:32.762: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n" +Oct 27 14:29:32.762: INFO: stdout: "affinity-nodeport-timeout-52mxp" +Oct 27 14:29:52.762: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-480 exec execpod-affinityqn8nx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.0.2:30742/' +Oct 27 14:29:53.104: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.0.2:30742/\n" +Oct 27 14:29:53.104: INFO: stdout: "affinity-nodeport-timeout-j89dd" +Oct 27 14:29:53.104: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController 
affinity-nodeport-timeout in namespace services-480, will wait for the garbage collector to delete the pods +Oct 27 14:29:53.196: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 13.528077ms +Oct 27 14:29:53.297: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.789425ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:29:55.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-480" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":100,"skipped":1945,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:29:55.356: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3739 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Oct 27 14:29:55.629: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Oct 27 14:30:09.766: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:30:14.429: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:30:30.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3739" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":101,"skipped":1952,"failed":0} +SSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:30:30.523: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9869 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-9869 +STEP: creating service affinity-clusterip in namespace services-9869 +STEP: creating replication controller affinity-clusterip in namespace services-9869 +I1027 14:30:30.753566 5683 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-9869, replica count: 3 +I1027 14:30:33.804078 5683 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:30:33.826: INFO: Creating new exec pod +Oct 27 14:30:36.867: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9869 exec execpod-affinityf28k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Oct 27 14:30:37.209: INFO: stderr: "+ nc -v -t+ -w 2echo affinity-clusterip hostName 80\n\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Oct 27 14:30:37.209: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:30:37.209: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9869 exec execpod-affinityf28k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.69.238.185 80' +Oct 27 14:30:37.611: INFO: stderr: "+ nc -v -t -w 2 100.69.238.185 80\nConnection to 100.69.238.185 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Oct 27 14:30:37.611: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:30:37.611: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9869 exec execpod-affinityf28k5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://100.69.238.185:80/ ; done' +Oct 27 14:30:38.078: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.69.238.185:80/\n" +Oct 27 14:30:38.078: INFO: stdout: "\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7\naffinity-clusterip-7r9q7" +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Received response from host: affinity-clusterip-7r9q7 +Oct 27 14:30:38.078: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-9869, will wait for the garbage collector to delete the pods +Oct 27 14:30:38.176: INFO: Deleting ReplicationController affinity-clusterip took: 17.440927ms +Oct 27 14:30:38.277: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.742048ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:30:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready +STEP: Destroying namespace "services-9869" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":102,"skipped":1955,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:30:41.134: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename aggregator +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-9182 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Oct 27 14:30:41.325: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the sample API server. +Oct 27 14:30:41.635: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Oct 27 14:30:43.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:30:45.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:30:47.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:30:49.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:30:51.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:30:53.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770941841, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:30:57.430: INFO: Waited 1.660677358s for the sample-apiserver to be ready to handle requests. +STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Oct 27 14:30:57.813: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:30:58.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-9182" for this suite. +•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":103,"skipped":1973,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:30:58.432: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7420 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-7420 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:30:58.661: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:30:59.133: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:31:01.144: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:03.145: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:05.146: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:07.145: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:09.146: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:11.146: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 
14:31:13.146: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:15.146: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:17.145: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:31:19.147: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:31:19.171: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:31:23.378: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:31:23.378: INFO: Going to poll 100.96.0.52 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:31:23.389: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.0.52 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7420 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:31:23.389: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:31:24.689: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 14:31:24.689: INFO: Going to poll 100.96.1.105 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:31:24.701: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.105 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7420 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:31:24.701: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:31:25.918: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:25.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7420" for this suite. 
+•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":104,"skipped":1983,"failed":0} +SSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:25.952: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5536 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-aaf7ee0f-fc7b-46d0-9910-f19b03860896 +STEP: Creating a pod to test consume secrets +Oct 27 14:31:26.174: INFO: Waiting up to 5m0s for pod "pod-secrets-a048dcfc-a603-4cf3-b0f4-701ab82a4fca" in namespace "secrets-5536" to be "Succeeded or Failed" +Oct 27 14:31:26.185: INFO: Pod "pod-secrets-a048dcfc-a603-4cf3-b0f4-701ab82a4fca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.732678ms +Oct 27 14:31:28.198: INFO: Pod "pod-secrets-a048dcfc-a603-4cf3-b0f4-701ab82a4fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024350223s +STEP: Saw pod success +Oct 27 14:31:28.198: INFO: Pod "pod-secrets-a048dcfc-a603-4cf3-b0f4-701ab82a4fca" satisfied condition "Succeeded or Failed" +Oct 27 14:31:28.210: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-a048dcfc-a603-4cf3-b0f4-701ab82a4fca container secret-volume-test: +STEP: delete the pod +Oct 27 14:31:28.251: INFO: Waiting for pod pod-secrets-a048dcfc-a603-4cf3-b0f4-701ab82a4fca to disappear +Oct 27 14:31:28.262: INFO: Pod pod-secrets-a048dcfc-a603-4cf3-b0f4-701ab82a4fca no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:28.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5536" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":105,"skipped":1987,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:28.297: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-1726 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:31:28.523: INFO: The status of Pod busybox-host-aliasesec7c22a0-d9b7-4a78-9648-46fc2abd2b3b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:31:30.535: INFO: The status of Pod busybox-host-aliasesec7c22a0-d9b7-4a78-9648-46fc2abd2b3b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:31:32.536: INFO: The status of Pod busybox-host-aliasesec7c22a0-d9b7-4a78-9648-46fc2abd2b3b is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:32.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1726" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":106,"skipped":2063,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:32.619: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2727 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:32.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2727" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":107,"skipped":2080,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:32.956: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-1225 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:31:33.144: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:39.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-1225" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":108,"skipped":2090,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:39.644: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1414 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:31:51.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1414" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":109,"skipped":2109,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:31:51.058: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3281 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:31:51.337: INFO: Number of nodes with available pods: 0 +Oct 27 14:31:51.337: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:31:52.376: INFO: Number of nodes with available pods: 0 +Oct 27 14:31:52.376: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:31:53.370: INFO: Number of nodes with available pods: 1 +Oct 27 14:31:53.370: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod +Oct 27 14:31:54.372: INFO: Number of nodes with available pods: 2 +Oct 27 14:31:54.372: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Oct 27 14:31:54.434: INFO: Number of nodes with available pods: 1 +Oct 27 14:31:54.434: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod +Oct 27 14:31:55.466: INFO: Number of nodes with available pods: 1 +Oct 27 14:31:55.467: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod +Oct 27 14:31:56.468: INFO: Number of nodes with available pods: 1 +Oct 27 14:31:56.468: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod +Oct 27 14:31:57.468: INFO: Number of nodes with available pods: 1 +Oct 27 14:31:57.468: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod +Oct 27 14:31:58.467: INFO: Number of nodes with available pods: 1 +Oct 27 14:31:58.467: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod +Oct 27 14:31:59.466: INFO: Number of nodes with available pods: 2 +Oct 27 14:31:59.466: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3281, will wait for the garbage collector to delete the pods +Oct 27 14:31:59.552: INFO: Deleting DaemonSet.extensions daemon-set took: 12.770116ms +Oct 27 14:31:59.652: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.645284ms +Oct 27 14:32:02.064: INFO: Number of nodes with available pods: 0 +Oct 27 14:32:02.064: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:32:02.103: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"17457"},"items":null} + +Oct 27 14:32:02.114: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"17462"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:02.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3281" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":110,"skipped":2125,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:02.184: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-2108 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Oct 27 14:32:02.453: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2108 c2e6c3bc-4708-4c0f-aab4-8a9f6743f16c 17473 0 2021-10-27 14:32:02 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:32:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 14:32:02.453: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2108 c2e6c3bc-4708-4c0f-aab4-8a9f6743f16c 17474 0 2021-10-27 14:32:02 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-27 14:32:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:32:02.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-2108" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":111,"skipped":2128,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:32:02.478: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-8104 +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:32:02.665: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating first CR +Oct 27 14:32:04.785: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:32:04Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:32:04Z]] name:name1 resourceVersion:17495 uid:3a30c187-54ac-403f-8c06-6e5e1046025e] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Oct 27 14:32:14.800: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:32:14Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:32:14Z]] name:name2 resourceVersion:17555 uid:f1ad325f-4304-473b-869b-d312fa641013] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Oct 27 14:32:24.817: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:32:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:32:24Z]] name:name1 resourceVersion:17599 uid:3a30c187-54ac-403f-8c06-6e5e1046025e] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Oct 27 14:32:34.834: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:32:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test 
operation:Update time:2021-10-27T14:32:34Z]] name:name2 resourceVersion:17643 uid:f1ad325f-4304-473b-869b-d312fa641013] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Oct 27 14:32:44.860: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:32:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:32:24Z]] name:name1 resourceVersion:17709 uid:3a30c187-54ac-403f-8c06-6e5e1046025e] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Oct 27 14:32:54.877: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-27T14:32:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-27T14:32:34Z]] name:name2 resourceVersion:17753 uid:f1ad325f-4304-473b-869b-d312fa641013] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:05.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-8104" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":112,"skipped":2142,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:05.448: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-616 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:33:05.657: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Oct 27 14:33:13.325: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 create -f -' +Oct 27 14:33:13.851: INFO: stderr: "" +Oct 27 14:33:13.852: INFO: stdout: 
"e2e-test-crd-publish-openapi-3510-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 14:33:13.852: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 delete e2e-test-crd-publish-openapi-3510-crds test-foo' +Oct 27 14:33:13.953: INFO: stderr: "" +Oct 27 14:33:13.953: INFO: stdout: "e2e-test-crd-publish-openapi-3510-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Oct 27 14:33:13.953: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 apply -f -' +Oct 27 14:33:14.166: INFO: stderr: "" +Oct 27 14:33:14.166: INFO: stdout: "e2e-test-crd-publish-openapi-3510-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 27 14:33:14.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 delete e2e-test-crd-publish-openapi-3510-crds test-foo' +Oct 27 14:33:14.265: INFO: stderr: "" +Oct 27 14:33:14.265: INFO: stdout: "e2e-test-crd-publish-openapi-3510-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Oct 27 14:33:14.265: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 create -f -' +Oct 27 14:33:14.428: INFO: rc: 1 +Oct 27 14:33:14.428: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 apply -f -' +Oct 27 14:33:14.594: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Oct 27 14:33:14.594: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 create -f -' +Oct 27 14:33:14.753: INFO: rc: 1 +Oct 27 14:33:14.753: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 --namespace=crd-publish-openapi-616 apply -f -' +Oct 27 14:33:14.927: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Oct 27 14:33:14.927: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 explain e2e-test-crd-publish-openapi-3510-crds' +Oct 27 14:33:15.100: INFO: stderr: "" +Oct 27 14:33:15.100: INFO: stdout: 
"KIND: e2e-test-crd-publish-openapi-3510-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Oct 27 14:33:15.101: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 explain e2e-test-crd-publish-openapi-3510-crds.metadata' +Oct 27 14:33:15.270: INFO: stderr: "" +Oct 27 14:33:15.270: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3510-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Oct 27 14:33:15.271: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 explain e2e-test-crd-publish-openapi-3510-crds.spec' +Oct 27 14:33:15.437: INFO: stderr: "" +Oct 27 14:33:15.437: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3510-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Oct 27 14:33:15.437: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 explain e2e-test-crd-publish-openapi-3510-crds.spec.bars' +Oct 27 14:33:15.632: INFO: stderr: "" +Oct 27 14:33:15.632: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3510-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Oct 27 14:33:15.632: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-616 explain e2e-test-crd-publish-openapi-3510-crds.spec.bars2' +Oct 27 14:33:15.829: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:19.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-616" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":113,"skipped":2144,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:20.022: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2594 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:33:20.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:33:22.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942000, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:33:25.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] 
should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:26.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2594" for this suite. +STEP: Destroying namespace "webhook-2594-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":114,"skipped":2175,"failed":0} +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:26.234: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9834 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 14:33:26.450: INFO: Waiting up to 5m0s for pod "pod-e0497b9b-d230-4c1a-a76b-557417e988af" in namespace "emptydir-9834" to be "Succeeded or Failed" +Oct 27 14:33:26.462: INFO: Pod "pod-e0497b9b-d230-4c1a-a76b-557417e988af": Phase="Pending", Reason="", readiness=false. Elapsed: 11.369841ms +Oct 27 14:33:28.475: INFO: Pod "pod-e0497b9b-d230-4c1a-a76b-557417e988af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024129215s +Oct 27 14:33:30.488: INFO: Pod "pod-e0497b9b-d230-4c1a-a76b-557417e988af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037287387s +STEP: Saw pod success +Oct 27 14:33:30.488: INFO: Pod "pod-e0497b9b-d230-4c1a-a76b-557417e988af" satisfied condition "Succeeded or Failed" +Oct 27 14:33:30.499: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-e0497b9b-d230-4c1a-a76b-557417e988af container test-container: +STEP: delete the pod +Oct 27 14:33:30.545: INFO: Waiting for pod pod-e0497b9b-d230-4c1a-a76b-557417e988af to disappear +Oct 27 14:33:30.556: INFO: Pod pod-e0497b9b-d230-4c1a-a76b-557417e988af no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:30.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9834" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":115,"skipped":2179,"failed":0} + +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:30.589: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3599 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:30.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3599" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":116,"skipped":2179,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:30.878: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7413 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:33:31.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e" in namespace "projected-7413" to be "Succeeded or Failed" +Oct 27 14:33:31.120: INFO: Pod "downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.604793ms +Oct 27 14:33:33.133: INFO: Pod "downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023162745s +Oct 27 14:33:35.147: INFO: Pod "downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037118697s +STEP: Saw pod success +Oct 27 14:33:35.147: INFO: Pod "downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e" satisfied condition "Succeeded or Failed" +Oct 27 14:33:35.158: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e container client-container: +STEP: delete the pod +Oct 27 14:33:35.193: INFO: Waiting for pod downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e to disappear +Oct 27 14:33:35.205: INFO: Pod downwardapi-volume-1e93a319-fc2e-45ab-a33f-6e918bf28d3e no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:35.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7413" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":117,"skipped":2248,"failed":0} +S +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:35.237: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5414 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-baf4e637-f667-48b0-bb70-5c4d432b2f23 +STEP: Creating a pod to test consume configMaps +Oct 27 14:33:35.461: INFO: Waiting up to 5m0s for pod "pod-configmaps-637b07ff-aaf1-436b-9c35-80b0318ab9e0" in namespace "configmap-5414" to be "Succeeded or Failed" +Oct 27 14:33:35.472: INFO: Pod "pod-configmaps-637b07ff-aaf1-436b-9c35-80b0318ab9e0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822668ms +Oct 27 14:33:37.484: INFO: Pod "pod-configmaps-637b07ff-aaf1-436b-9c35-80b0318ab9e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.022789412s +STEP: Saw pod success +Oct 27 14:33:37.484: INFO: Pod "pod-configmaps-637b07ff-aaf1-436b-9c35-80b0318ab9e0" satisfied condition "Succeeded or Failed" +Oct 27 14:33:37.496: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-637b07ff-aaf1-436b-9c35-80b0318ab9e0 container agnhost-container: +STEP: delete the pod +Oct 27 14:33:37.571: INFO: Waiting for pod pod-configmaps-637b07ff-aaf1-436b-9c35-80b0318ab9e0 to disappear +Oct 27 14:33:37.582: INFO: Pod pod-configmaps-637b07ff-aaf1-436b-9c35-80b0318ab9e0 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:37.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5414" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":118,"skipped":2249,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:37.615: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5885 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 14:33:37.825: INFO: Waiting up to 5m0s for pod "pod-b83adc8c-cbf2-44f9-9945-11358b8a9d8b" in namespace "emptydir-5885" to be "Succeeded or Failed" +Oct 27 14:33:37.836: INFO: Pod "pod-b83adc8c-cbf2-44f9-9945-11358b8a9d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.819229ms +Oct 27 14:33:39.848: INFO: Pod "pod-b83adc8c-cbf2-44f9-9945-11358b8a9d8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022748616s +STEP: Saw pod success +Oct 27 14:33:39.848: INFO: Pod "pod-b83adc8c-cbf2-44f9-9945-11358b8a9d8b" satisfied condition "Succeeded or Failed" +Oct 27 14:33:39.859: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-b83adc8c-cbf2-44f9-9945-11358b8a9d8b container test-container: +STEP: delete the pod +Oct 27 14:33:39.927: INFO: Waiting for pod pod-b83adc8c-cbf2-44f9-9945-11358b8a9d8b to disappear +Oct 27 14:33:39.938: INFO: Pod pod-b83adc8c-cbf2-44f9-9945-11358b8a9d8b no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:39.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5885" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":119,"skipped":2258,"failed":0} +SSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:39.973: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1064 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:33:40.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4f36a33-9c6e-4bbc-9520-fa10dd0e79be" in namespace "downward-api-1064" to be "Succeeded or Failed" +Oct 27 14:33:40.198: INFO: Pod "downwardapi-volume-c4f36a33-9c6e-4bbc-9520-fa10dd0e79be": Phase="Pending", Reason="", readiness=false. Elapsed: 11.136012ms +Oct 27 14:33:42.212: INFO: Pod "downwardapi-volume-c4f36a33-9c6e-4bbc-9520-fa10dd0e79be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02504466s +STEP: Saw pod success +Oct 27 14:33:42.212: INFO: Pod "downwardapi-volume-c4f36a33-9c6e-4bbc-9520-fa10dd0e79be" satisfied condition "Succeeded or Failed" +Oct 27 14:33:42.225: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-c4f36a33-9c6e-4bbc-9520-fa10dd0e79be container client-container: +STEP: delete the pod +Oct 27 14:33:42.304: INFO: Waiting for pod downwardapi-volume-c4f36a33-9c6e-4bbc-9520-fa10dd0e79be to disappear +Oct 27 14:33:42.316: INFO: Pod downwardapi-volume-c4f36a33-9c6e-4bbc-9520-fa10dd0e79be no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:42.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1064" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":120,"skipped":2262,"failed":0} +S +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:42.352: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-646 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-815be90b-81fa-4e8a-9d26-3785ccf46d0e +STEP: Creating a pod to test consume configMaps +Oct 27 14:33:42.659: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e" in namespace "projected-646" to be "Succeeded or Failed" +Oct 27 14:33:42.674: INFO: Pod "pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.462496ms +Oct 27 14:33:44.687: INFO: Pod "pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027853449s +Oct 27 14:33:46.700: INFO: Pod "pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041398699s +STEP: Saw pod success +Oct 27 14:33:46.700: INFO: Pod "pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e" satisfied condition "Succeeded or Failed" +Oct 27 14:33:46.712: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e container agnhost-container: +STEP: delete the pod +Oct 27 14:33:46.750: INFO: Waiting for pod pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e to disappear +Oct 27 14:33:46.761: INFO: Pod pod-projected-configmaps-775ede9c-6c16-416f-a5bc-323a586fb23e no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:46.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-646" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":121,"skipped":2263,"failed":0} +SSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:46.794: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-60 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:51.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-60" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":122,"skipped":2266,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:51.415: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8445 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-84d4fa07-92df-4985-9e24-71df102375e5 +STEP: Creating a pod to test consume configMaps +Oct 27 14:33:51.641: INFO: Waiting up to 5m0s for pod "pod-configmaps-bfff75cc-fee5-4c9a-b5c9-3b78f42aab5e" in namespace "configmap-8445" to be "Succeeded or Failed" +Oct 27 14:33:51.652: INFO: Pod "pod-configmaps-bfff75cc-fee5-4c9a-b5c9-3b78f42aab5e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.939544ms +Oct 27 14:33:53.663: INFO: Pod "pod-configmaps-bfff75cc-fee5-4c9a-b5c9-3b78f42aab5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022692201s +STEP: Saw pod success +Oct 27 14:33:53.663: INFO: Pod "pod-configmaps-bfff75cc-fee5-4c9a-b5c9-3b78f42aab5e" satisfied condition "Succeeded or Failed" +Oct 27 14:33:53.675: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-bfff75cc-fee5-4c9a-b5c9-3b78f42aab5e container agnhost-container: +STEP: delete the pod +Oct 27 14:33:53.714: INFO: Waiting for pod pod-configmaps-bfff75cc-fee5-4c9a-b5c9-3b78f42aab5e to disappear +Oct 27 14:33:53.724: INFO: Pod pod-configmaps-bfff75cc-fee5-4c9a-b5c9-3b78f42aab5e no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:33:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8445" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":123,"skipped":2305,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:33:53.758: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-3308 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Oct 27 14:34:14.202: INFO: EndpointSlice for Service endpointslice-3308/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:24.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-3308" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":124,"skipped":2323,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:24.265: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3990 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Oct 27 14:34:24.540: INFO: Number of nodes with available pods: 0 +Oct 27 14:34:24.541: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:34:25.574: INFO: Number of nodes with available pods: 0 +Oct 27 14:34:25.574: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:34:26.574: INFO: Number of nodes with available pods: 1 +Oct 27 14:34:26.574: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:34:27.573: INFO: Number of nodes with available pods: 2 +Oct 27 14:34:27.574: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Oct 27 14:34:27.634: INFO: Number of nodes with available pods: 1 +Oct 27 14:34:27.634: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:34:28.666: INFO: Number of nodes with available pods: 1 +Oct 27 14:34:28.666: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:34:29.667: INFO: Number of nodes with available pods: 2 +Oct 27 14:34:29.667: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3990, will wait for the garbage collector to delete the pods +Oct 27 14:34:29.763: INFO: Deleting DaemonSet.extensions daemon-set took: 12.876593ms +Oct 27 14:34:29.864: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.361341ms +Oct 27 14:34:32.175: INFO: Number of nodes with available pods: 0 +Oct 27 14:34:32.175: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 14:34:32.187: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"18626"},"items":null} + +Oct 27 14:34:32.198: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"18626"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:32.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3990" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":125,"skipped":2339,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:32.269: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7660 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Oct 27 14:34:37.034: INFO: Successfully updated pod "adopt-release--1-2vdjl" +STEP: Checking that the Job readopts the Pod +Oct 27 14:34:37.034: INFO: Waiting up to 15m0s for pod "adopt-release--1-2vdjl" in namespace "job-7660" to be "adopted" +Oct 27 14:34:37.048: INFO: Pod "adopt-release--1-2vdjl": Phase="Running", Reason="", readiness=true. Elapsed: 13.83767ms +Oct 27 14:34:37.048: INFO: Pod "adopt-release--1-2vdjl" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Oct 27 14:34:37.576: INFO: Successfully updated pod "adopt-release--1-2vdjl" +STEP: Checking that the Job releases the Pod +Oct 27 14:34:37.576: INFO: Waiting up to 15m0s for pod "adopt-release--1-2vdjl" in namespace "job-7660" to be "released" +Oct 27 14:34:37.586: INFO: Pod "adopt-release--1-2vdjl": Phase="Running", Reason="", readiness=true. Elapsed: 10.692612ms +Oct 27 14:34:37.586: INFO: Pod "adopt-release--1-2vdjl" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:34:37.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-7660" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":126,"skipped":2379,"failed":0} + +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:34:37.658: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9429 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-d0a7c9ba-8df3-4914-9a43-9ce231a304ab +STEP: Creating the pod +Oct 27 14:34:37.958: INFO: The status of Pod pod-projected-configmaps-c8263613-8257-4934-bc13-389d7b0129fe is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:34:39.972: INFO: The status of Pod pod-projected-configmaps-c8263613-8257-4934-bc13-389d7b0129fe is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:34:41.971: INFO: The status of Pod pod-projected-configmaps-c8263613-8257-4934-bc13-389d7b0129fe is Running (Ready = true) +STEP: Updating configmap projected-configmap-test-upd-d0a7c9ba-8df3-4914-9a43-9ce231a304ab +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:50.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9429" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":127,"skipped":2379,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:50.948: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2715 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Starting the proxy +Oct 27 14:35:51.153: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2715 proxy --unix-socket=/tmp/kubectl-proxy-unix333433974/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:51.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2715" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":128,"skipped":2390,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:51.222: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3642 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Oct 27 14:35:52.216: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1027 14:35:52.216867 5683 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:52.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3642" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":129,"skipped":2402,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:52.243: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8038 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-c11ee5a0-1a69-453d-830e-8e35db332aa4 +STEP: Creating a pod to test consume secrets +Oct 27 14:35:52.472: INFO: Waiting up to 5m0s for pod "pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92" in namespace "secrets-8038" to be "Succeeded or Failed" +Oct 27 14:35:52.486: INFO: Pod "pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92": Phase="Pending", Reason="", readiness=false. Elapsed: 14.35807ms +Oct 27 14:35:54.497: INFO: Pod "pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02559582s +Oct 27 14:35:56.510: INFO: Pod "pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038512986s +STEP: Saw pod success +Oct 27 14:35:56.510: INFO: Pod "pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92" satisfied condition "Succeeded or Failed" +Oct 27 14:35:56.531: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92 container secret-volume-test: +STEP: delete the pod +Oct 27 14:35:56.570: INFO: Waiting for pod pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92 to disappear +Oct 27 14:35:56.581: INFO: Pod pod-secrets-c8b68e20-37cb-4c13-9861-046d6d083c92 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:35:56.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8038" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":130,"skipped":2413,"failed":0} +S +------------------------------ +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:35:56.632: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-1794 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:00.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1794" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":131,"skipped":2414,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:00.924: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8694 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:01.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8694" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":132,"skipped":2432,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:01.294: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6274 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override all +Oct 27 14:36:01.514: INFO: Waiting up to 5m0s for pod "client-containers-71d1cd8e-3d85-450b-a350-4579378586eb" in namespace "containers-6274" to be "Succeeded or Failed" +Oct 27 14:36:01.525: INFO: Pod "client-containers-71d1cd8e-3d85-450b-a350-4579378586eb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.282435ms +Oct 27 14:36:03.538: INFO: Pod "client-containers-71d1cd8e-3d85-450b-a350-4579378586eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023422408s +Oct 27 14:36:05.552: INFO: Pod "client-containers-71d1cd8e-3d85-450b-a350-4579378586eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037516317s +STEP: Saw pod success +Oct 27 14:36:05.552: INFO: Pod "client-containers-71d1cd8e-3d85-450b-a350-4579378586eb" satisfied condition "Succeeded or Failed" +Oct 27 14:36:05.563: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod client-containers-71d1cd8e-3d85-450b-a350-4579378586eb container agnhost-container: +STEP: delete the pod +Oct 27 14:36:05.598: INFO: Waiting for pod client-containers-71d1cd8e-3d85-450b-a350-4579378586eb to disappear +Oct 27 14:36:05.628: INFO: Pod client-containers-71d1cd8e-3d85-450b-a350-4579378586eb no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:05.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6274" for this suite. 
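+
+What this test asserts is the standard mapping of pod fields onto the image: command replaces the image ENTRYPOINT, args replaces the image CMD. A hand-rolled sketch (pod name and image invented):
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: override-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: demo
+    image: busybox
+    command: ["echo"]            # replaces the image ENTRYPOINT
+    args: ["hello", "world"]     # replaces the image CMD
+EOF
+kubectl logs override-demo       # prints "hello world" once the container has run
+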
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":133,"skipped":2443,"failed":0} +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:05.662: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6033 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:36:05.851: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6033 version' +Oct 27 14:36:05.947: INFO: stderr: "" +Oct 27 14:36:05.947: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:38:50Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:32:41Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:36:05.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6033" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":134,"skipped":2452,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:05.973: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-1142 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Oct 27 14:36:06.252: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:06.253: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:06.253: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:06.253: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:06.253: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:06.253: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:06.336: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:06.336: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 27 14:36:08.309: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 14:36:08.309: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 27 14:36:08.376: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Oct 27 14:36:08.399: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Oct 27 14:36:08.429: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 +Oct 27 14:36:08.429: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 +Oct 27 14:36:08.429: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0 
+Oct 27 14:36:08.429: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0
+Oct 27 14:36:08.429: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 0
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.430: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.432: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.432: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:08.436: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:08.436: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:08.456: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:08.456: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:10.460: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:10.460: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:10.481: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+STEP: listing Deployments
+Oct 27 14:36:10.495: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
+STEP: updating the Deployment
+Oct 27 14:36:10.529: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+STEP: fetching the DeploymentStatus
+Oct 27 14:36:10.552: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:10.552: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:10.552: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:10.552: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:10.644: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:12.536: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:12.647: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:12.661: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:12.738: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
+Oct 27 14:36:14.617: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
+STEP: patching the DeploymentStatus
+STEP: fetching the DeploymentStatus
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 1
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 3
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 2
+Oct 27 14:36:14.684: INFO: observed Deployment test-deployment in namespace deployment-1142 with ReadyReplicas 3
+STEP: deleting the Deployment
+Oct 27 14:36:14.709: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+Oct 27 14:36:14.710: INFO: observed event type MODIFIED
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
+Oct 27 14:36:14.728: INFO: Log out all the ReplicaSets if there is no deployment created
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 27 14:36:14.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-1142" for this suite.
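+
+Rough kubectl equivalents of the Deployment lifecycle steps logged above (names and images invented; the suite itself drives the API directly):
+
+kubectl create deployment test-deployment --image=nginx
+kubectl patch deployment test-deployment -p '{"metadata":{"labels":{"test-deployment":"patched"}}}'
+kubectl set image deployment/test-deployment nginx=nginx:1.21   # an update, triggering a new ReplicaSet rollout
+kubectl rollout status deployment/test-deployment
+kubectl delete deployment test-deployment
+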
+•{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":135,"skipped":2460,"failed":0} +SSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:36:14.767: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1590 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:36:15.009: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 14:36:15.049: INFO: Number of nodes with available pods: 0 +Oct 27 14:36:15.049: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:36:16.085: INFO: Number of nodes with available pods: 0 +Oct 27 14:36:16.085: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:36:17.085: INFO: Number of nodes with available pods: 0 +Oct 27 14:36:17.085: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 14:36:18.084: INFO: Number of nodes with available pods: 2 +Oct 27 14:36:18.084: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Oct 27 14:36:18.175: INFO: Wrong image for pod: daemon-set-qmxxd. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:36:19.200: INFO: Wrong image for pod: daemon-set-qmxxd. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:36:20.200: INFO: Wrong image for pod: daemon-set-qmxxd. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:36:21.236: INFO: Wrong image for pod: daemon-set-qmxxd. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:36:21.237: INFO: Pod daemon-set-vxd7v is not available +Oct 27 14:36:22.200: INFO: Wrong image for pod: daemon-set-qmxxd. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 27 14:36:22.200: INFO: Pod daemon-set-vxd7v is not available +Oct 27 14:36:24.200: INFO: Pod daemon-set-sqkrp is not available +STEP: Check that daemon pods are still running on every node of the cluster. 
+Oct 27 14:36:24.251: INFO: Number of nodes with available pods: 1
+Oct 27 14:36:24.251: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod
+Oct 27 14:36:25.286: INFO: Number of nodes with available pods: 1
+Oct 27 14:36:25.286: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod
+Oct 27 14:36:26.285: INFO: Number of nodes with available pods: 2
+Oct 27 14:36:26.285: INFO: Number of running nodes: 2, number of available pods: 2
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1590, will wait for the garbage collector to delete the pods
+Oct 27 14:36:26.420: INFO: Deleting DaemonSet.extensions daemon-set took: 13.543309ms
+Oct 27 14:36:26.520: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.798739ms
+Oct 27 14:36:28.733: INFO: Number of nodes with available pods: 0
+Oct 27 14:36:28.733: INFO: Number of running nodes: 0, number of available pods: 0
+Oct 27 14:36:28.745: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"19606"},"items":null}
+
+Oct 27 14:36:28.757: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"19606"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 27 14:36:28.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-1590" for this suite.
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":136,"skipped":2463,"failed":0}
+SSSSSS
+------------------------------
+[sig-node] NoExecuteTaintManager Single Pod [Serial]
+  removing taint cancels eviction [Disruptive] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 27 14:36:28.831: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename taint-single-pod
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-9088
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164
+Oct 27 14:36:29.025: INFO: Waiting up to 1m0s for all nodes to be ready
+Oct 27 14:37:29.338: INFO: Waiting for terminating namespaces to be deleted...
+[It] removing taint cancels eviction [Disruptive] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+Oct 27 14:37:29.349: INFO: Starting informer...
+STEP: Starting pod...
+Oct 27 14:37:29.387: INFO: Pod is running on shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc. Tainting Node
+STEP: Trying to apply a taint on the Node
+STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
+STEP: Waiting short time to make sure Pod is queued for deletion
+Oct 27 14:37:29.448: INFO: Pod wasn't evicted. Proceeding
+Oct 27 14:37:29.448: INFO: Removing taint from Node
+STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
+STEP: Waiting some time to make sure that toleration time passed.
+Oct 27 14:38:44.548: INFO: Pod wasn't evicted. Test successful
+[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 27 14:38:44.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "taint-single-pod-9088" for this suite.
+•{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":137,"skipped":2469,"failed":0}
+SSSSSS
+------------------------------
+[sig-instrumentation] Events
+  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-instrumentation] Events
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 27 14:38:44.589: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename events
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-4617
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: creating a test event
+STEP: listing all events in all namespaces
+STEP: patching the test event
+STEP: fetching the test event
+STEP: deleting the test event
+STEP: listing all events in all namespaces
+[AfterEach] [sig-instrumentation] Events
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 27 14:38:44.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "events-4617" for this suite.
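+
+The create/list/patch/fetch/delete flow above targets the core/v1 events API and can be replayed by hand; all values here are invented:
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Event
+metadata:
+  name: demo-event
+  namespace: default
+type: Normal
+reason: Testing
+message: event created to demonstrate the API lifecycle
+involvedObject:
+  namespace: default
+EOF
+kubectl get events --all-namespaces
+kubectl delete event demo-event -n default
+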
+•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":138,"skipped":2475,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:44.918: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-8573 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Oct 27 14:38:45.133: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 14:38:50.145: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:50.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-8573" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":139,"skipped":2504,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:50.246: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4847 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-34f83012-a786-4180-a887-bbf460412958 +STEP: Creating a pod to test consume secrets +Oct 27 14:38:50.609: INFO: Waiting up to 5m0s for pod "pod-secrets-482bef52-de8b-4adf-9f24-c465845ae75b" in namespace "secrets-4847" to be "Succeeded or Failed" +Oct 27 14:38:50.621: INFO: Pod "pod-secrets-482bef52-de8b-4adf-9f24-c465845ae75b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.447984ms +Oct 27 14:38:52.634: INFO: Pod "pod-secrets-482bef52-de8b-4adf-9f24-c465845ae75b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.02506138s +STEP: Saw pod success +Oct 27 14:38:52.634: INFO: Pod "pod-secrets-482bef52-de8b-4adf-9f24-c465845ae75b" satisfied condition "Succeeded or Failed" +Oct 27 14:38:52.646: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-482bef52-de8b-4adf-9f24-c465845ae75b container secret-env-test: +STEP: delete the pod +Oct 27 14:38:52.710: INFO: Waiting for pod pod-secrets-482bef52-de8b-4adf-9f24-c465845ae75b to disappear +Oct 27 14:38:52.730: INFO: Pod pod-secrets-482bef52-de8b-4adf-9f24-c465845ae75b no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:38:52.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4847" for this suite. +•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":140,"skipped":2514,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:38:52.764: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6385 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-6385 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-6385 +STEP: Waiting until pod test-pod will start running in namespace statefulset-6385 +STEP: Creating statefulset with conflicting port in namespace statefulset-6385 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6385 +Oct 27 14:38:57.054: INFO: Observed stateful pod in namespace: statefulset-6385, name: ss-0, uid: b2902e47-5c99-4759-9d37-c5b681960eda, status phase: Pending. Waiting for statefulset controller to delete. +Oct 27 14:38:57.068: INFO: Observed stateful pod in namespace: statefulset-6385, name: ss-0, uid: b2902e47-5c99-4759-9d37-c5b681960eda, status phase: Failed. Waiting for statefulset controller to delete. +Oct 27 14:38:57.075: INFO: Observed stateful pod in namespace: statefulset-6385, name: ss-0, uid: b2902e47-5c99-4759-9d37-c5b681960eda, status phase: Failed. Waiting for statefulset controller to delete. 
+Oct 27 14:38:57.077: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6385 +STEP: Removing pod with conflicting port in namespace statefulset-6385 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6385 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:39:01.169: INFO: Deleting all statefulset in ns statefulset-6385 +Oct 27 14:39:01.181: INFO: Scaling statefulset ss to 0 +Oct 27 14:39:11.235: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 14:39:11.247: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:11.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6385" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":141,"skipped":2529,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:11.321: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename certificates +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-454 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 14:39:12.825: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 14:39:12.861: INFO: waiting for watch events with expected annotations +Oct 27 14:39:12.861: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:13.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-454" for this suite. 
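+
+The certificates.k8s.io/v1 surface exercised above (create, status, the /approval subresource, delete) can be walked through by hand; the CN, file names and CSR name below are invented:
+
+openssl req -new -newkey rsa:2048 -nodes -keyout demo.key -subj "/CN=demo-user" -out demo.csr
+cat <<EOF | kubectl apply -f -
+apiVersion: certificates.k8s.io/v1
+kind: CertificateSigningRequest
+metadata:
+  name: demo-csr
+spec:
+  request: $(base64 < demo.csr | tr -d '\n')
+  signerName: kubernetes.io/kube-apiserver-client
+  usages: ["client auth"]
+EOF
+kubectl certificate approve demo-csr   # drives the /approval subresource
+kubectl delete csr demo-csr
+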
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":142,"skipped":2545,"failed":0} +SS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:13.037: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4779 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-projected-w4z8 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:39:13.273: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-w4z8" in namespace "subpath-4779" to be "Succeeded or Failed" +Oct 27 14:39:13.330: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Pending", Reason="", readiness=false. Elapsed: 57.004361ms +Oct 27 14:39:15.344: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070954135s +Oct 27 14:39:17.357: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 4.08367819s +Oct 27 14:39:19.369: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 6.095632716s +Oct 27 14:39:21.382: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 8.108511999s +Oct 27 14:39:23.395: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 10.121924577s +Oct 27 14:39:25.408: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 12.134551215s +Oct 27 14:39:27.420: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 14.147034943s +Oct 27 14:39:29.434: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 16.160208247s +Oct 27 14:39:31.447: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 18.174004376s +Oct 27 14:39:33.460: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Running", Reason="", readiness=true. Elapsed: 20.18661936s +Oct 27 14:39:35.474: INFO: Pod "pod-subpath-test-projected-w4z8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.200365166s +STEP: Saw pod success +Oct 27 14:39:35.474: INFO: Pod "pod-subpath-test-projected-w4z8" satisfied condition "Succeeded or Failed" +Oct 27 14:39:35.485: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-subpath-test-projected-w4z8 container test-container-subpath-projected-w4z8: +STEP: delete the pod +Oct 27 14:39:35.528: INFO: Waiting for pod pod-subpath-test-projected-w4z8 to disappear +Oct 27 14:39:35.539: INFO: Pod pod-subpath-test-projected-w4z8 no longer exists +STEP: Deleting pod pod-subpath-test-projected-w4z8 +Oct 27 14:39:35.539: INFO: Deleting pod "pod-subpath-test-projected-w4z8" in namespace "subpath-4779" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:39:35.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4779" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":143,"skipped":2547,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:39:35.585: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-2143 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:39:36.399: INFO: Pod name wrapped-volume-race-b56680e9-3a20-4b80-b18c-01412efb2652: Found 0 pods out of 5 +Oct 27 14:39:41.431: INFO: Pod name wrapped-volume-race-b56680e9-3a20-4b80-b18c-01412efb2652: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-b56680e9-3a20-4b80-b18c-01412efb2652 in namespace emptydir-wrapper-2143, will wait for the garbage collector to delete the pods +Oct 27 14:39:43.595: INFO: Deleting ReplicationController wrapped-volume-race-b56680e9-3a20-4b80-b18c-01412efb2652 took: 14.455099ms +Oct 27 14:39:43.695: INFO: Terminating ReplicationController wrapped-volume-race-b56680e9-3a20-4b80-b18c-01412efb2652 pods took: 100.296981ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:39:47.236: INFO: Pod name wrapped-volume-race-514c09d1-0116-468f-8b5d-9e9205d45828: Found 0 pods out of 5 +Oct 27 14:39:52.269: INFO: Pod name wrapped-volume-race-514c09d1-0116-468f-8b5d-9e9205d45828: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-514c09d1-0116-468f-8b5d-9e9205d45828 in namespace emptydir-wrapper-2143, will wait for the garbage collector to delete the pods +Oct 
27 14:39:54.463: INFO: Deleting ReplicationController wrapped-volume-race-514c09d1-0116-468f-8b5d-9e9205d45828 took: 13.865417ms +Oct 27 14:39:54.563: INFO: Terminating ReplicationController wrapped-volume-race-514c09d1-0116-468f-8b5d-9e9205d45828 pods took: 100.680671ms +STEP: Creating RC which spawns configmap-volume pods +Oct 27 14:39:58.207: INFO: Pod name wrapped-volume-race-9e0e45d3-e900-4491-98b3-fd81db1b08da: Found 0 pods out of 5 +Oct 27 14:40:03.238: INFO: Pod name wrapped-volume-race-9e0e45d3-e900-4491-98b3-fd81db1b08da: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-9e0e45d3-e900-4491-98b3-fd81db1b08da in namespace emptydir-wrapper-2143, will wait for the garbage collector to delete the pods +Oct 27 14:40:05.401: INFO: Deleting ReplicationController wrapped-volume-race-9e0e45d3-e900-4491-98b3-fd81db1b08da took: 15.220214ms +Oct 27 14:40:05.502: INFO: Terminating ReplicationController wrapped-volume-race-9e0e45d3-e900-4491-98b3-fd81db1b08da pods took: 101.078307ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:08.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-2143" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":144,"skipped":2593,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:08.868: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4663 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: Gathering metrics +Oct 27 14:40:09.176: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: 
+For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +W1027 14:40:09.176105 5683 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 27 14:40:09.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4663" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":145,"skipped":2594,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:09.201: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9509 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:40:09.431: INFO: The status of Pod pod-update-710264a7-ac6d-4e0f-836a-a666eade9855 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:40:11.444: INFO: The status of Pod pod-update-710264a7-ac6d-4e0f-836a-a666eade9855 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:40:13.445: INFO: The status of Pod pod-update-710264a7-ac6d-4e0f-836a-a666eade9855 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 14:40:14.000: INFO: Successfully updated pod "pod-update-710264a7-ac6d-4e0f-836a-a666eade9855" +STEP: verifying the updated pod is in kubernetes +Oct 27 14:40:14.044: INFO: Pod update OK +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:14.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9509" for this suite. 
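+
+Only a handful of pod fields are mutable in place (labels and annotations among them), which is what the update above exercises; by hand (names invented):
+
+kubectl run pod-update-demo --image=nginx
+kubectl label pod pod-update-demo time=updated --overwrite
+kubectl get pod pod-update-demo --show-labels
+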
+•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":146,"skipped":2623,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:14.079: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-4955 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:14.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-4955" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":147,"skipped":2669,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:14.574: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-722 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-0d495ffd-a5d5-4a6c-b41e-13a5975c24fe +STEP: Creating a pod to test consume secrets +Oct 27 14:40:15.438: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c" in namespace "projected-722" to be "Succeeded or Failed" +Oct 27 14:40:15.450: INFO: Pod "pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.552878ms +Oct 27 14:40:17.462: INFO: Pod "pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023938662s +Oct 27 14:40:19.474: INFO: Pod "pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03602087s +STEP: Saw pod success +Oct 27 14:40:19.474: INFO: Pod "pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c" satisfied condition "Succeeded or Failed" +Oct 27 14:40:19.486: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:40:19.529: INFO: Waiting for pod pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c to disappear +Oct 27 14:40:19.542: INFO: Pod pod-projected-secrets-9ed734e4-d29b-4802-a917-e8e935c9367c no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:19.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-722" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":148,"skipped":2680,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:19.575: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7738 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 27 14:40:19.783: INFO: Waiting up to 5m0s for pod "pod-32ccdcdf-bd32-4b8e-9f2e-1568f2ea6a84" in namespace "emptydir-7738" to be "Succeeded or Failed" +Oct 27 14:40:19.830: INFO: Pod "pod-32ccdcdf-bd32-4b8e-9f2e-1568f2ea6a84": Phase="Pending", Reason="", readiness=false. Elapsed: 47.300183ms +Oct 27 14:40:22.060: INFO: Pod "pod-32ccdcdf-bd32-4b8e-9f2e-1568f2ea6a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.277228264s +STEP: Saw pod success +Oct 27 14:40:22.060: INFO: Pod "pod-32ccdcdf-bd32-4b8e-9f2e-1568f2ea6a84" satisfied condition "Succeeded or Failed" +Oct 27 14:40:22.072: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-32ccdcdf-bd32-4b8e-9f2e-1568f2ea6a84 container test-container: +STEP: delete the pod +Oct 27 14:40:22.146: INFO: Waiting for pod pod-32ccdcdf-bd32-4b8e-9f2e-1568f2ea6a84 to disappear +Oct 27 14:40:22.158: INFO: Pod pod-32ccdcdf-bd32-4b8e-9f2e-1568f2ea6a84 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:22.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7738" for this suite. 
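+
+The volume in this test is tmpfs because of medium: Memory; a minimal non-root sketch (all names invented):
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-tmpfs-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000              # non-root, as in the test
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "echo data > /ed/file && chmod 0644 /ed/file && ls -l /ed && mount | grep ' /ed '"]
+    volumeMounts:
+    - name: ed
+      mountPath: /ed
+  volumes:
+  - name: ed
+    emptyDir:
+      medium: Memory             # tmpfs-backed emptyDir
+EOF
+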
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":149,"skipped":2681,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:22.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6597 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 14:40:22.404: INFO: Waiting up to 5m0s for pod "pod-e5946644-d12f-4c5f-85e8-958886442dc5" in namespace "emptydir-6597" to be "Succeeded or Failed" +Oct 27 14:40:22.415: INFO: Pod "pod-e5946644-d12f-4c5f-85e8-958886442dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.19148ms +Oct 27 14:40:24.427: INFO: Pod "pod-e5946644-d12f-4c5f-85e8-958886442dc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023273008s +STEP: Saw pod success +Oct 27 14:40:24.427: INFO: Pod "pod-e5946644-d12f-4c5f-85e8-958886442dc5" satisfied condition "Succeeded or Failed" +Oct 27 14:40:24.439: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-e5946644-d12f-4c5f-85e8-958886442dc5 container test-container: +STEP: delete the pod +Oct 27 14:40:24.544: INFO: Waiting for pod pod-e5946644-d12f-4c5f-85e8-958886442dc5 to disappear +Oct 27 14:40:24.555: INFO: Pod pod-e5946644-d12f-4c5f-85e8-958886442dc5 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:24.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6597" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":150,"skipped":2720,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:24.635: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-862 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:40:24.930: INFO: Got root ca configmap in namespace "svcaccounts-862" +Oct 27 14:40:24.943: INFO: Deleted root ca configmap in namespace "svcaccounts-862" +STEP: waiting for a new root ca configmap created +Oct 27 14:40:25.455: INFO: Recreated root ca configmap in namespace "svcaccounts-862" +Oct 27 14:40:25.468: INFO: Updated root ca configmap in namespace "svcaccounts-862" +STEP: waiting for the root ca configmap reconciled +Oct 27 14:40:25.980: INFO: Reconciled root ca configmap in namespace "svcaccounts-862" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:25.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-862" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":151,"skipped":2753,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:26.014: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3108 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:40:26.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:40:28.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942426, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:40:31.585: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should 
unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:32.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3108" for this suite. +STEP: Destroying namespace "webhook-3108-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":152,"skipped":2759,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:32.177: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-9172 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:40:32.410: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Oct 27 14:40:33.491: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:33.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9172" for this suite. 
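[Editor's note: the ReplicationController test above shows quota pressure surfacing as a ReplicaFailure condition on the RC rather than as a hard error. A compact reproduction with illustrative names:]

```bash
kubectl create namespace quota-demo
kubectl -n quota-demo create quota condition-test --hard=pods=2
sleep 5   # let the quota controller calculate initial usage
kubectl -n quota-demo create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3          # one more pod than the quota allows
  selector:
    app: quota-demo
  template:
    metadata:
      labels:
        app: quota-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.5
EOF
# The third pod is rejected, so a ReplicaFailure condition appears on the RC:
kubectl -n quota-demo get rc condition-test -o jsonpath='{.status.conditions}'
# Scaling down to fit within the quota clears the condition again:
kubectl -n quota-demo scale rc condition-test --replicas=2
```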
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":153,"skipped":2769,"failed":0} +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:33.537: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7973 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:40:33.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f" in namespace "projected-7973" to be "Succeeded or Failed" +Oct 27 14:40:33.758: INFO: Pod "downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.102807ms +Oct 27 14:40:35.770: INFO: Pod "downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023663381s +Oct 27 14:40:37.782: INFO: Pod "downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035722518s +STEP: Saw pod success +Oct 27 14:40:37.783: INFO: Pod "downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f" satisfied condition "Succeeded or Failed" +Oct 27 14:40:37.794: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f container client-container: +STEP: delete the pod +Oct 27 14:40:37.871: INFO: Waiting for pod downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f to disappear +Oct 27 14:40:37.882: INFO: Pod downwardapi-volume-6ad05bae-2987-4d37-a5d9-e2c099004b7f no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:37.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7973" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":154,"skipped":2772,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:37.916: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6101 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting the proxy server +Oct 27 14:40:38.108: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6101 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:38.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6101" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":155,"skipped":2782,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:38.219: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3144 +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Oct 27 14:40:42.459: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3144 PodName:pod-sharedvolume-8d89f14e-a5ef-4774-a952-5e35278e473f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:40:42.459: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:40:42.710: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:42.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3144" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":156,"skipped":2814,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:42.745: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8688 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:40:54.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8688" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":346,"completed":157,"skipped":2845,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:40:54.093: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3083 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:40:54.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:40:56.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, 
loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942454, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:40:59.774: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:00.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3083" for this suite. +STEP: Destroying namespace "webhook-3083-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":158,"skipped":2849,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:00.133: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-5712 +STEP: Waiting for a default service account to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 27 14:41:00.328: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:22.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-5712" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":159,"skipped":2883,"failed":0} +SS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:22.488: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-6205 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-6t6l +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 14:41:22.729: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6t6l" in namespace "subpath-6205" to be "Succeeded or Failed" +Oct 27 14:41:22.740: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Pending", Reason="", readiness=false. Elapsed: 11.545252ms +Oct 27 14:41:24.753: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024264106s +Oct 27 14:41:26.766: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 4.037182354s +Oct 27 14:41:28.779: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 6.050328331s +Oct 27 14:41:30.793: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 8.064354654s +Oct 27 14:41:32.806: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 10.077068354s +Oct 27 14:41:34.852: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 12.12324505s +Oct 27 14:41:36.864: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 14.135334443s +Oct 27 14:41:38.878: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 16.149100262s +Oct 27 14:41:40.891: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 18.161725591s +Oct 27 14:41:42.905: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. Elapsed: 20.176072083s +Oct 27 14:41:44.918: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.188938789s +Oct 27 14:41:46.931: INFO: Pod "pod-subpath-test-configmap-6t6l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.201806647s +STEP: Saw pod success +Oct 27 14:41:46.931: INFO: Pod "pod-subpath-test-configmap-6t6l" satisfied condition "Succeeded or Failed" +Oct 27 14:41:46.943: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-subpath-test-configmap-6t6l container test-container-subpath-configmap-6t6l: +STEP: delete the pod +Oct 27 14:41:46.982: INFO: Waiting for pod pod-subpath-test-configmap-6t6l to disappear +Oct 27 14:41:46.994: INFO: Pod pod-subpath-test-configmap-6t6l no longer exists +STEP: Deleting pod pod-subpath-test-configmap-6t6l +Oct 27 14:41:46.994: INFO: Deleting pod "pod-subpath-test-configmap-6t6l" in namespace "subpath-6205" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:47.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6205" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":160,"skipped":2885,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:47.041: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7540 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on node default medium +Oct 27 14:41:47.258: INFO: Waiting up to 5m0s for pod "pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9" in namespace "emptydir-7540" to be "Succeeded or Failed" +Oct 27 14:41:47.269: INFO: Pod "pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.389329ms +Oct 27 14:41:49.283: INFO: Pod "pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024677263s +Oct 27 14:41:51.296: INFO: Pod "pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038379029s +STEP: Saw pod success +Oct 27 14:41:51.296: INFO: Pod "pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9" satisfied condition "Succeeded or Failed" +Oct 27 14:41:51.308: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9 container test-container: +STEP: delete the pod +Oct 27 14:41:51.349: INFO: Waiting for pod pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9 to disappear +Oct 27 14:41:51.360: INFO: Pod pod-445a4455-e1e8-4a06-b25f-fe3f80e756c9 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:51.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7540" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":161,"skipped":2955,"failed":0} +SSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:51.393: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9445 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:41:55.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9445" for this suite. 
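[Editor's note: the Kubelet test above asserts that a container whose command always fails reports a terminated state with a populated reason. A quick hand check under the same idea (names illustrative; the reason settles once the container has exited):]

```bash
kubectl run always-fails --image=busybox:1.36 --restart=Never -- sh -c 'exit 1'
sleep 10   # allow the container to run and terminate
kubectl get pod always-fails \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # prints "Error"
```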
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":162,"skipped":2960,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:41:55.672: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5983 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:41:56.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:41:58.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942516, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:42:01.674: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:42:02.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5983" for this suite. +STEP: Destroying namespace "webhook-5983-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":163,"skipped":2989,"failed":0} +SSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:42:02.346: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2760 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396 +STEP: creating an pod +Oct 27 14:42:02.535: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Oct 27 14:42:02.639: INFO: stderr: "" +Oct 27 14:42:02.639: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for log generator to start. +Oct 27 14:42:02.639: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Oct 27 14:42:02.639: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2760" to be "running and ready, or succeeded" +Oct 27 14:42:02.651: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.095179ms +Oct 27 14:42:04.665: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025662275s +Oct 27 14:42:06.678: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.038720535s +Oct 27 14:42:06.678: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Oct 27 14:42:06.678: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] +STEP: checking for a matching strings +Oct 27 14:42:06.678: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 logs logs-generator logs-generator' +Oct 27 14:42:06.791: INFO: stderr: "" +Oct 27 14:42:06.791: INFO: stdout: "I1027 14:42:03.940844 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/hdv 509\nI1027 14:42:04.141000 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/hq5w 253\nI1027 14:42:04.341564 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/qwm 355\nI1027 14:42:04.540867 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mh8 344\nI1027 14:42:04.741247 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/fj77 230\nI1027 14:42:04.941737 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/x2lb 538\nI1027 14:42:05.140964 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/dtj 325\nI1027 14:42:05.341380 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/c4q 321\nI1027 14:42:05.541804 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/24b 585\nI1027 14:42:05.741239 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/vpld 276\nI1027 14:42:05.941652 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/mldj 569\nI1027 14:42:06.140948 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/9fn9 246\nI1027 14:42:06.341334 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/2gm4 335\nI1027 14:42:06.541761 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/hb9n 438\nI1027 14:42:06.740968 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/69s7 476\n" +STEP: limiting log lines +Oct 27 14:42:06.791: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 logs logs-generator logs-generator --tail=1' +Oct 27 14:42:06.915: INFO: stderr: "" +Oct 27 14:42:06.915: INFO: stdout: "I1027 14:42:06.740968 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/69s7 476\n" +Oct 27 14:42:06.915: INFO: got output "I1027 14:42:06.740968 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/69s7 476\n" +STEP: limiting log bytes +Oct 27 14:42:06.915: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 logs logs-generator logs-generator --limit-bytes=1' +Oct 27 14:42:07.036: INFO: stderr: "" +Oct 27 14:42:07.036: INFO: stdout: "I" +Oct 27 14:42:07.036: INFO: got output "I" +STEP: exposing timestamps +Oct 27 14:42:07.036: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 logs logs-generator logs-generator --tail=1 --timestamps' +Oct 27 14:42:07.151: 
INFO: stderr: "" +Oct 27 14:42:07.151: INFO: stdout: "2021-10-27T14:42:07.141694933Z I1027 14:42:07.141552 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/4qj 355\n" +Oct 27 14:42:07.151: INFO: got output "2021-10-27T14:42:07.141694933Z I1027 14:42:07.141552 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/4qj 355\n" +STEP: restricting to a time range +Oct 27 14:42:09.652: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 logs logs-generator logs-generator --since=1s' +Oct 27 14:42:09.761: INFO: stderr: "" +Oct 27 14:42:09.761: INFO: stdout: "I1027 14:42:08.940939 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/f4j 320\nI1027 14:42:09.141304 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/gp2 263\nI1027 14:42:09.341743 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/74wd 489\nI1027 14:42:09.540985 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/l2s 393\nI1027 14:42:09.741422 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/ns/pods/pqp2 563\n" +Oct 27 14:42:09.762: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 logs logs-generator logs-generator --since=24h' +Oct 27 14:42:09.870: INFO: stderr: "" +Oct 27 14:42:09.870: INFO: stdout: "I1027 14:42:03.940844 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/hdv 509\nI1027 14:42:04.141000 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/hq5w 253\nI1027 14:42:04.341564 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/qwm 355\nI1027 14:42:04.540867 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mh8 344\nI1027 14:42:04.741247 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/fj77 230\nI1027 14:42:04.941737 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/x2lb 538\nI1027 14:42:05.140964 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/dtj 325\nI1027 14:42:05.341380 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/c4q 321\nI1027 14:42:05.541804 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/24b 585\nI1027 14:42:05.741239 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/vpld 276\nI1027 14:42:05.941652 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/mldj 569\nI1027 14:42:06.140948 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/9fn9 246\nI1027 14:42:06.341334 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/2gm4 335\nI1027 14:42:06.541761 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/hb9n 438\nI1027 14:42:06.740968 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/69s7 476\nI1027 14:42:06.941331 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/4rgs 263\nI1027 14:42:07.141552 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/4qj 355\nI1027 14:42:07.340924 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/j9p 259\nI1027 14:42:07.541141 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/6np 406\nI1027 14:42:07.742720 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/ptq 311\nI1027 14:42:07.941026 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/x26n 535\nI1027 14:42:08.141450 1 
logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/bm2n 444\nI1027 14:42:08.341809 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/qrz9 466\nI1027 14:42:08.541085 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/qmdz 546\nI1027 14:42:08.741446 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/8rl8 210\nI1027 14:42:08.940939 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/f4j 320\nI1027 14:42:09.141304 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/gp2 263\nI1027 14:42:09.341743 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/74wd 489\nI1027 14:42:09.540985 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/l2s 393\nI1027 14:42:09.741422 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/ns/pods/pqp2 563\n" +[AfterEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401 +Oct 27 14:42:09.871: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2760 delete pod logs-generator' +Oct 27 14:42:11.407: INFO: stderr: "" +Oct 27 14:42:11.408: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:42:11.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2760" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":164,"skipped":2993,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:42:11.442: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-4634 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:42:11.658: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-09b4d94b-716e-431f-81b0-d7a28ddd517e" in namespace "security-context-test-4634" to be "Succeeded or Failed" +Oct 27 14:42:11.670: INFO: Pod "busybox-readonly-false-09b4d94b-716e-431f-81b0-d7a28ddd517e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.704527ms +Oct 27 14:42:13.682: INFO: Pod "busybox-readonly-false-09b4d94b-716e-431f-81b0-d7a28ddd517e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024308761s +Oct 27 14:42:13.682: INFO: Pod "busybox-readonly-false-09b4d94b-716e-431f-81b0-d7a28ddd517e" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:42:13.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-4634" for this suite. +•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":165,"skipped":3025,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:42:13.717: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-9539 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:42:13.948: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:43:14.056: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 27 14:43:14.115: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 14:43:14.138: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 14:43:14.177: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 14:43:14.196: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:24.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-9539" for this suite. 
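[Editor's note: the preemption test above saturates the nodes with low/medium-priority pods, then submits a higher-priority pod that can only be scheduled by evicting one of them. The priority levels themselves are plain API objects; a sketch with illustrative names and values:]

```bash
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: demo-low-priority
value: 100
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: demo-high-priority
value: 1000
EOF
# Pods opt in via spec.priorityClassName. When a demo-high-priority pod is
# pending for lack of resources, the scheduler may preempt (evict) pods of
# lower priority to make room, which is the behavior the test asserts.
```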
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":166,"skipped":3073,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:24.469: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-2954 +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:43:25.139: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:25.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-2954" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":167,"skipped":3102,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:25.884: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-1078 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Oct 27 14:43:26.530: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:43:28.543: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:43:28.584: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:43:30.597: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:43:32.597: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 14:43:32.621: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 14:43:32.632: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 27 14:43:34.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 14:43:34.646: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 27 14:43:36.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 27 14:43:36.646: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:36.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-1078" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":168,"skipped":3157,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:36.739: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9475 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-9475 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:43:36.968: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 14:43:46.982: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Oct 27 14:43:47.050: INFO: 
Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:43:47.050: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Pending - Ready=false +Oct 27 14:43:57.063: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 14:43:57.063: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 14:43:57.122: INFO: Deleting all statefulset in ns statefulset-9475 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:43:57.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9475" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":169,"skipped":3165,"failed":0} + +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:43:57.190: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-3377 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:01.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-3377" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":170,"skipped":3165,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:01.493: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5707 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:45:01.688: INFO: Creating deployment "test-recreate-deployment" +Oct 27 14:45:01.701: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Oct 27 14:45:01.723: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created +Oct 27 14:45:03.748: INFO: Waiting deployment "test-recreate-deployment" to complete +Oct 27 14:45:03.759: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Oct 27 14:45:03.785: INFO: Updating deployment test-recreate-deployment +Oct 27 14:45:03.785: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:45:03.862: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-5707 f243ed12-4a65-49da-bf3d-06dc13237e63 23412 2 2021-10-27 14:45:01 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 14:45:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:45:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e7f248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 14:45:03 +0000 UTC,LastTransitionTime:2021-10-27 14:45:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-27 14:45:03 +0000 UTC,LastTransitionTime:2021-10-27 14:45:01 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Oct 27 14:45:03.873: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-5707 fe19cb15-66b6-46e5-87ad-01bc59629273 23411 1 2021-10-27 14:45:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f243ed12-4a65-49da-bf3d-06dc13237e63 0xc0035e7ee0 0xc0035e7ee1}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:45:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f243ed12-4a65-49da-bf3d-06dc13237e63\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:45:03 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035e7f78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:45:03.873: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Oct 27 14:45:03.874: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-5707 da7f2227-7fb7-42c2-9d07-60ee22a91065 23404 2 2021-10-27 14:45:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f243ed12-4a65-49da-bf3d-06dc13237e63 0xc0035e7bc7 0xc0035e7bc8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:45:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f243ed12-4a65-49da-bf3d-06dc13237e63\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:45:03 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035e7e78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:45:03.885: INFO: Pod "test-recreate-deployment-85d47dcb4-74x9s" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-74x9s test-recreate-deployment-85d47dcb4- deployment-5707 d306940d-ed6b-44f7-8e25-30fd0b3d3735 23413 0 2021-10-27 14:45:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 fe19cb15-66b6-46e5-87ad-01bc59629273 0xc00753c450 0xc00753c451}] [] [{kube-controller-manager Update v1 2021-10-27 14:45:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe19cb15-66b6-46e5-87ad-01bc59629273\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:45:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gt9rc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gt9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operat
or:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:45:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:45:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:45:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:45:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 14:45:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:03.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-5707" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":171,"skipped":3189,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:03.919: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-4940 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:04.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-4940" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":172,"skipped":3199,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:04.321: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5712 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:45:04.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66fbbc30-5282-47c2-98ee-fa927f04a3ba" in namespace "downward-api-5712" to be "Succeeded or Failed" +Oct 27 14:45:04.548: INFO: Pod "downwardapi-volume-66fbbc30-5282-47c2-98ee-fa927f04a3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 11.070316ms +Oct 27 14:45:06.559: INFO: Pod "downwardapi-volume-66fbbc30-5282-47c2-98ee-fa927f04a3ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02234728s +STEP: Saw pod success +Oct 27 14:45:06.559: INFO: Pod "downwardapi-volume-66fbbc30-5282-47c2-98ee-fa927f04a3ba" satisfied condition "Succeeded or Failed" +Oct 27 14:45:06.570: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-66fbbc30-5282-47c2-98ee-fa927f04a3ba container client-container: +STEP: delete the pod +Oct 27 14:45:06.647: INFO: Waiting for pod downwardapi-volume-66fbbc30-5282-47c2-98ee-fa927f04a3ba to disappear +Oct 27 14:45:06.661: INFO: Pod downwardapi-volume-66fbbc30-5282-47c2-98ee-fa927f04a3ba no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:06.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5712" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":173,"skipped":3276,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:06.694: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6101 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-0bb0699c-2588-4d7f-b3c1-86508c196627 +STEP: Creating a pod to test consume secrets +Oct 27 14:45:06.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1c1439b-d6c3-4e13-97c7-1c7c866fdd41" in namespace "projected-6101" to be "Succeeded or Failed" +Oct 27 14:45:06.956: INFO: Pod "pod-projected-secrets-e1c1439b-d6c3-4e13-97c7-1c7c866fdd41": Phase="Pending", Reason="", readiness=false. Elapsed: 11.600518ms +Oct 27 14:45:08.968: INFO: Pod "pod-projected-secrets-e1c1439b-d6c3-4e13-97c7-1c7c866fdd41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023672178s +STEP: Saw pod success +Oct 27 14:45:08.968: INFO: Pod "pod-projected-secrets-e1c1439b-d6c3-4e13-97c7-1c7c866fdd41" satisfied condition "Succeeded or Failed" +Oct 27 14:45:09.029: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-secrets-e1c1439b-d6c3-4e13-97c7-1c7c866fdd41 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:45:09.134: INFO: Waiting for pod pod-projected-secrets-e1c1439b-d6c3-4e13-97c7-1c7c866fdd41 to disappear +Oct 27 14:45:09.148: INFO: Pod pod-projected-secrets-e1c1439b-d6c3-4e13-97c7-1c7c866fdd41 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:09.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6101" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":174,"skipped":3288,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:09.240: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-7489 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Oct 27 14:45:11.941: INFO: running pods: 0 < 1 +Oct 27 14:45:13.954: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:16.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-7489" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":175,"skipped":3344,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:16.086: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-145 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-ec6cc0d1-338b-4074-9fd6-1cbd5c843d67 +STEP: Creating a pod to test consume secrets +Oct 27 14:45:16.313: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-389d3357-b9c7-4d58-b7c5-d9f8706d5b07" in namespace "projected-145" to be "Succeeded or Failed" +Oct 27 14:45:16.324: INFO: Pod "pod-projected-secrets-389d3357-b9c7-4d58-b7c5-d9f8706d5b07": Phase="Pending", Reason="", readiness=false. Elapsed: 10.688276ms +Oct 27 14:45:18.335: INFO: Pod "pod-projected-secrets-389d3357-b9c7-4d58-b7c5-d9f8706d5b07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022170202s +STEP: Saw pod success +Oct 27 14:45:18.335: INFO: Pod "pod-projected-secrets-389d3357-b9c7-4d58-b7c5-d9f8706d5b07" satisfied condition "Succeeded or Failed" +Oct 27 14:45:18.347: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-secrets-389d3357-b9c7-4d58-b7c5-d9f8706d5b07 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 14:45:18.425: INFO: Waiting for pod pod-projected-secrets-389d3357-b9c7-4d58-b7c5-d9f8706d5b07 to disappear +Oct 27 14:45:18.436: INFO: Pod pod-projected-secrets-389d3357-b9c7-4d58-b7c5-d9f8706d5b07 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:18.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-145" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":176,"skipped":3350,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:18.471: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4281 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Oct 27 14:45:18.662: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:45:22.950: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4281" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":177,"skipped":3353,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:39.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-1287 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override arguments +Oct 27 14:45:40.090: INFO: Waiting up to 5m0s for pod "client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad" in namespace "containers-1287" to be "Succeeded or Failed" +Oct 27 14:45:40.101: INFO: Pod "client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.143422ms +Oct 27 14:45:42.115: INFO: Pod "client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024330207s +Oct 27 14:45:44.130: INFO: Pod "client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039589276s +STEP: Saw pod success +Oct 27 14:45:44.130: INFO: Pod "client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad" satisfied condition "Succeeded or Failed" +Oct 27 14:45:44.142: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad container agnhost-container: +STEP: delete the pod +Oct 27 14:45:44.182: INFO: Waiting for pod client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad to disappear +Oct 27 14:45:44.194: INFO: Pod client-containers-3e20483d-c1c4-4275-8405-d82d73a028ad no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:44.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1287" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":178,"skipped":3366,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:44.229: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4105 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:45:44.931: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:45:47.985: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Oct 27 14:45:48.087: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:45:48.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready +STEP: Destroying namespace "webhook-4105" for this suite. +STEP: Destroying namespace "webhook-4105-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":179,"skipped":3372,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:45:48.297: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5277 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service endpoint-test2 in namespace services-5277 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5277 to expose endpoints map[] +Oct 27 14:45:48.564: INFO: successfully validated that service endpoint-test2 in namespace services-5277 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-5277 +Oct 27 14:45:48.597: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:45:50.610: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:45:52.609: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5277 to expose endpoints map[pod1:[80]] +Oct 27 14:45:52.665: INFO: successfully validated that service endpoint-test2 in namespace services-5277 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Oct 27 14:45:52.666: INFO: Creating new exec pod +Oct 27 14:45:57.709: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5277 exec execpodncfzv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:45:58.383: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:45:58.383: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:45:58.383: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5277 exec execpodncfzv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.121.76 80' +Oct 27 14:45:58.745: 
INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.121.76 80\nConnection to 100.67.121.76 80 port [tcp/http] succeeded!\n" +Oct 27 14:45:58.745: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-5277 +Oct 27 14:45:58.775: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:46:00.789: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:46:02.787: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5277 to expose endpoints map[pod1:[80] pod2:[80]] +Oct 27 14:46:02.854: INFO: successfully validated that service endpoint-test2 in namespace services-5277 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Oct 27 14:46:03.855: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5277 exec execpodncfzv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:46:04.249: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:46:04.249: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:46:04.249: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5277 exec execpodncfzv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.121.76 80' +Oct 27 14:46:04.602: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.121.76 80\nConnection to 100.67.121.76 80 port [tcp/http] succeeded!\n" +Oct 27 14:46:04.602: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-5277 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5277 to expose endpoints map[pod2:[80]] +Oct 27 14:46:04.663: INFO: successfully validated that service endpoint-test2 in namespace services-5277 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Oct 27 14:46:05.663: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5277 exec execpodncfzv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 27 14:46:06.019: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 27 14:46:06.019: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:46:06.019: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5277 exec execpodncfzv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.121.76 80' +Oct 27 14:46:06.402: INFO: stderr: "+ echo 
hostName\n+ nc -v -t -w 2 100.67.121.76 80\nConnection to 100.67.121.76 80 port [tcp/http] succeeded!\n" +Oct 27 14:46:06.402: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-5277 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5277 to expose endpoints map[] +Oct 27 14:46:07.486: INFO: successfully validated that service endpoint-test2 in namespace services-5277 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:07.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5277" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":180,"skipped":3395,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:07.570: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-112 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 14:46:07.823: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc" in namespace "projected-112" to be "Succeeded or Failed" +Oct 27 14:46:07.833: INFO: Pod "downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.862301ms +Oct 27 14:46:09.848: INFO: Pod "downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025452147s +Oct 27 14:46:11.861: INFO: Pod "downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038671573s +STEP: Saw pod success +Oct 27 14:46:11.861: INFO: Pod "downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc" satisfied condition "Succeeded or Failed" +Oct 27 14:46:11.873: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc container client-container: +STEP: delete the pod +Oct 27 14:46:11.948: INFO: Waiting for pod downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc to disappear +Oct 27 14:46:11.959: INFO: Pod downwardapi-volume-f8a5dfbc-c201-4fae-bea4-a77434a30abc no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:11.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-112" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":181,"skipped":3401,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:11.994: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-44 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-1b2e9fa9-7ad8-4b98-9bed-d80f507ed83b +STEP: Creating a pod to test consume configMaps +Oct 27 14:46:12.224: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3" in namespace "projected-44" to be "Succeeded or Failed" +Oct 27 14:46:12.235: INFO: Pod "pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.348778ms +Oct 27 14:46:14.248: INFO: Pod "pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024102906s +Oct 27 14:46:16.261: INFO: Pod "pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036850537s +STEP: Saw pod success +Oct 27 14:46:16.261: INFO: Pod "pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3" satisfied condition "Succeeded or Failed" +Oct 27 14:46:16.272: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3 container agnhost-container: +STEP: delete the pod +Oct 27 14:46:16.318: INFO: Waiting for pod pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3 to disappear +Oct 27 14:46:16.329: INFO: Pod pod-projected-configmaps-b439d956-6b3e-49d4-be3a-f054bcb479a3 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:16.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-44" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":182,"skipped":3469,"failed":0} +SSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:16.364: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8324 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:46:16.553: INFO: PodSpec: initContainers in spec.initContainers +Oct 27 14:46:58.521: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-cd4d737f-91e4-42a2-acb2-68e955bcfa59", GenerateName:"", Namespace:"init-container-8324", SelfLink:"", UID:"da493f80-ec3b-4717-98c2-975b77ea5391", ResourceVersion:"24334", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770942776, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"553906336"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"ac340f354c225d733cf686b8ce75fc4def651581be40cd78ede8449d677d35fb", "cni.projectcalico.org/podIP":"100.96.1.180/32", "cni.projectcalico.org/podIPs":"100.96.1.180/32", "kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004065fc8), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0040d6000), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0040d6018), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0040d6030), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0040d6048), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0040d6060), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-q7nbw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004cc4420), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-q7nbw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-q7nbw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", 
SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-q7nbw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005a67df8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003418bd0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005a67ee0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005a67f40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005a67f48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005a67f4c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc005197f30), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942776, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with 
incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942776, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942776, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942776, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.250.0.3", PodIP:"100.96.1.180", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.180"}}, StartTime:(*v1.Time)(0xc0040d6090), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003418cb0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003418d20)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://7f49bd8ffecc682975d7a4da3578d29db5d1b5a7053ee93c9b56aa597fbbb975", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004cc44a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004cc4480), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc005a67fdf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:46:58.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-8324" for this suite. 
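
The PodSpec dumped above boils down to a manifest like the following. This is a minimal sketch reconstructed from the logged spec (the pod name is hypothetical): with `restartPolicy: Always`, `init1` crash-loops forever, `init2` never starts, and the app container `run1` stays Waiting.

```bash
# Sketch: reproduce the dumped pod outside the e2e framework.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-always-demo   # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/false"]     # always fails; the kubelet restarts it indefinitely
  - name: init2
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/true"]      # never reached
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.5 # never started
EOF
# Watch init1's RestartCount climb while the pod stays Pending:
kubectl get pod init-fail-always-demo -w
```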
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":183,"skipped":3474,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:46:58.555: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-6548 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:47:06.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-6548" for this suite. +•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":184,"skipped":3483,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:47:06.808: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6266 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating cluster-info +Oct 27 14:47:06.997: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6266 cluster-info' +Oct 27 14:47:07.093: INFO: stderr: "" +Oct 27 14:47:07.093: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:47:07.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6266" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":185,"skipped":3493,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:47:07.118: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-2746 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 27 14:47:07.308: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:47:12.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-2746" for this suite. 
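
The RestartNever variant's spec is not dumped in the log, but by the test's own description it is the same shape of pod with `restartPolicy: Never`; a sketch under that assumption (all names hypothetical). Here the first init failure is terminal and the pod phase goes straight to Failed.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-never-demo    # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/false"]     # terminal failure: no restarts on a Never pod
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.5 # never started; the pod ends up Failed
EOF
kubectl get pod init-fail-never-demo -w
```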
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":186,"skipped":3507,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:47:12.669: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2337 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-dc614c1d-db6b-4c6a-974d-479bdf7e1679 +STEP: Creating a pod to test consume configMaps +Oct 27 14:47:12.900: INFO: Waiting up to 5m0s for pod "pod-configmaps-c23e568a-8c5f-4aaa-9630-48731dedbb58" in namespace "configmap-2337" to be "Succeeded or Failed" +Oct 27 14:47:12.911: INFO: Pod "pod-configmaps-c23e568a-8c5f-4aaa-9630-48731dedbb58": Phase="Pending", Reason="", readiness=false. Elapsed: 11.336901ms +Oct 27 14:47:14.924: INFO: Pod "pod-configmaps-c23e568a-8c5f-4aaa-9630-48731dedbb58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024368549s +STEP: Saw pod success +Oct 27 14:47:14.925: INFO: Pod "pod-configmaps-c23e568a-8c5f-4aaa-9630-48731dedbb58" satisfied condition "Succeeded or Failed" +Oct 27 14:47:14.936: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-c23e568a-8c5f-4aaa-9630-48731dedbb58 container agnhost-container: +STEP: delete the pod +Oct 27 14:47:15.015: INFO: Waiting for pod pod-configmaps-c23e568a-8c5f-4aaa-9630-48731dedbb58 to disappear +Oct 27 14:47:15.026: INFO: Pod pod-configmaps-c23e568a-8c5f-4aaa-9630-48731dedbb58 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:47:15.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2337" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":187,"skipped":3540,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:47:15.067: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-5576 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-5576 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 14:47:15.259: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 14:47:15.332: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:47:17.345: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:47:19.347: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:21.441: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:23.345: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:25.347: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:27.345: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:29.346: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:31.347: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:33.345: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:35.345: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 14:47:37.345: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 14:47:37.368: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 14:47:41.474: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 14:47:41.474: INFO: Going to poll 100.96.0.81 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 27 14:47:41.485: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.0.81:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5576 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:47:41.485: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:47:41.789: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 27 14:47:41.789: INFO: Going to poll 100.96.1.187 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 27 
14:47:41.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.187:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5576 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 14:47:41.800: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 14:47:42.110: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:47:42.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-5576" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":188,"skipped":3555,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:47:42.147: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-702 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:47:42.823: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:47:45.882: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:47:58.358: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready +STEP: Destroying namespace "webhook-702" for this suite. +STEP: Destroying namespace "webhook-702-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":189,"skipped":3583,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:47:58.474: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9427 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9427.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9427.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.164.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.164.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.164.64.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.64.164.73_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9427.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9427.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9427.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.164.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.164.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.164.64.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.64.164.73_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:48:02.954: INFO: Unable to read wheezy_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:02.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:03.011: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:03.030: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:03.130: INFO: Unable to read jessie_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:03.145: INFO: Unable to read jessie_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:03.159: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:03.174: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:03.258: INFO: Lookups using dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36 failed for: [wheezy_udp@dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_udp@dns-test-service.dns-9427.svc.cluster.local jessie_tcp@dns-test-service.dns-9427.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local] + +Oct 27 14:48:08.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.323: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods 
dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.341: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.356: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.513: INFO: Unable to read jessie_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.529: INFO: Unable to read jessie_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.543: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.557: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:08.657: INFO: Lookups using dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36 failed for: [wheezy_udp@dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_udp@dns-test-service.dns-9427.svc.cluster.local jessie_tcp@dns-test-service.dns-9427.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local] + +Oct 27 14:48:13.275: INFO: Unable to read wheezy_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.290: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.335: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.487: INFO: Unable to read jessie_udp@dns-test-service.dns-9427.svc.cluster.local from pod 
dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.517: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:13.621: INFO: Lookups using dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36 failed for: [wheezy_udp@dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_udp@dns-test-service.dns-9427.svc.cluster.local jessie_tcp@dns-test-service.dns-9427.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local] + +Oct 27 14:48:18.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.288: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.331: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.346: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.483: INFO: Unable to read jessie_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.497: INFO: Unable to read jessie_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.526: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:18.613: INFO: Lookups using dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36 failed for: [wheezy_udp@dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_udp@dns-test-service.dns-9427.svc.cluster.local jessie_tcp@dns-test-service.dns-9427.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local] + +Oct 27 14:48:23.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.289: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.339: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.354: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.486: INFO: Unable to read jessie_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.501: INFO: Unable to read jessie_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.515: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.530: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:23.617: INFO: Lookups using dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36 failed for: [wheezy_udp@dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_udp@dns-test-service.dns-9427.svc.cluster.local jessie_tcp@dns-test-service.dns-9427.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local] + +Oct 27 14:48:28.273: INFO: Unable to read wheezy_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.319: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.333: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.348: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.477: INFO: Unable to read jessie_udp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.491: INFO: Unable to read jessie_tcp@dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.519: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local from pod dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36: the server could not find the requested resource (get pods dns-test-73826f98-7013-4286-8861-d4a4257a0e36) +Oct 27 14:48:28.605: INFO: Lookups using dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36 failed for: [wheezy_udp@dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@dns-test-service.dns-9427.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_udp@dns-test-service.dns-9427.svc.cluster.local jessie_tcp@dns-test-service.dns-9427.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9427.svc.cluster.local] + +Oct 27 14:48:33.607: INFO: DNS probes using dns-9427/dns-test-73826f98-7013-4286-8861-d4a4257a0e36 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:48:33.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9427" for this suite. 
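
The dig loops above are the authoritative probes; for a quick manual spot-check of the same A record, the following sketch works (the client pod name is hypothetical, and SRV lookups such as `_http._tcp.dns-test-service...` additionally need an image that ships `dig`, as the logged probe pods do).

```bash
kubectl run dns-client --image=k8s.gcr.io/e2e-test-images/busybox:1.29-1 \
  --restart=Never -- sleep 3600
# A record of the test service asserted above:
kubectl exec dns-client -- nslookup dns-test-service.dns-9427.svc.cluster.local
```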
+•{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":190,"skipped":3594,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:48:33.745: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3337 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-aa96ac4e-004f-4174-a591-f3da81aae765 +STEP: Creating configMap with name cm-test-opt-upd-43003beb-0b0e-498b-85c5-61134386595a +STEP: Creating the pod +Oct 27 14:48:34.003: INFO: The status of Pod pod-configmaps-c55d8888-173e-4ac5-91cd-fa1ba45cd761 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:48:36.015: INFO: The status of Pod pod-configmaps-c55d8888-173e-4ac5-91cd-fa1ba45cd761 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:48:38.015: INFO: The status of Pod pod-configmaps-c55d8888-173e-4ac5-91cd-fa1ba45cd761 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-aa96ac4e-004f-4174-a591-f3da81aae765 +STEP: Updating configmap cm-test-opt-upd-43003beb-0b0e-498b-85c5-61134386595a +STEP: Creating configMap with name cm-test-opt-create-2d94f802-3bd5-41a5-b6d2-075bebbd65ff +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:48:40.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3337" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":191,"skipped":3621,"failed":0} + +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:48:40.383: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-264 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 14:48:40.607: INFO: The status of Pod pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:48:42.620: INFO: The status of Pod pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 27 14:48:43.186: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931" +Oct 27 14:48:43.186: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931" in namespace "pods-264" to be "terminated due to deadline exceeded" +Oct 27 14:48:43.197: INFO: Pod "pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931": Phase="Running", Reason="", readiness=true. Elapsed: 11.172325ms +Oct 27 14:48:45.211: INFO: Pod "pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931": Phase="Running", Reason="", readiness=true. Elapsed: 2.024356985s +Oct 27 14:48:47.223: INFO: Pod "pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.036700944s +Oct 27 14:48:47.223: INFO: Pod "pod-update-activedeadlineseconds-2c409c62-3ed6-42ac-9b92-b979708da931" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:48:47.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-264" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":192,"skipped":3621,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:48:47.258: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6767 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service multi-endpoint-test in namespace services-6767 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6767 to expose endpoints map[] +Oct 27 14:48:47.540: INFO: successfully validated that service multi-endpoint-test in namespace services-6767 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-6767 +Oct 27 14:48:47.570: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:48:49.583: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:48:51.583: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6767 to expose endpoints map[pod1:[100]] +Oct 27 14:48:51.641: INFO: successfully validated that service multi-endpoint-test in namespace services-6767 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-6767 +Oct 27 14:48:51.670: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:48:53.683: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:48:55.682: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6767 to expose endpoints map[pod1:[100] pod2:[101]] +Oct 27 14:48:55.750: INFO: successfully validated that service multi-endpoint-test in namespace services-6767 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Oct 27 14:48:55.750: INFO: Creating new exec pod +Oct 27 14:49:00.799: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6767 exec execpodb2gwd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Oct 27 14:49:01.157: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\n+ echo hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Oct 27 14:49:01.158: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad 
Request" +Oct 27 14:49:01.158: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6767 exec execpodb2gwd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.170.134 80' +Oct 27 14:49:01.462: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.170.134 80\nConnection to 100.70.170.134 80 port [tcp/http] succeeded!\n" +Oct 27 14:49:01.462: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:49:01.462: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6767 exec execpodb2gwd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Oct 27 14:49:01.785: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 81\n+ echo hostName\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Oct 27 14:49:01.785: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:49:01.785: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6767 exec execpodb2gwd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.170.134 81' +Oct 27 14:49:02.120: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.170.134 81\nConnection to 100.70.170.134 81 port [tcp/*] succeeded!\n" +Oct 27 14:49:02.120: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-6767 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6767 to expose endpoints map[pod2:[101]] +Oct 27 14:49:02.182: INFO: successfully validated that service multi-endpoint-test in namespace services-6767 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-6767 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6767 to expose endpoints map[] +Oct 27 14:49:02.249: INFO: successfully validated that service multi-endpoint-test in namespace services-6767 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:02.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6767" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":193,"skipped":3627,"failed":0} +SS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:02.303: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6431 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-11bf60df-c4f2-4675-8864-d361a44826c3 +STEP: Creating configMap with name cm-test-opt-upd-bcd51e9c-f846-4148-b4ce-c6d0e99566dd +STEP: Creating the pod +Oct 27 14:49:02.574: INFO: The status of Pod pod-projected-configmaps-9e6b4c61-0a42-479c-9bb6-703e44d8eea2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:49:04.587: INFO: The status of Pod pod-projected-configmaps-9e6b4c61-0a42-479c-9bb6-703e44d8eea2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:49:06.587: INFO: The status of Pod pod-projected-configmaps-9e6b4c61-0a42-479c-9bb6-703e44d8eea2 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-11bf60df-c4f2-4675-8864-d361a44826c3 +STEP: Updating configmap cm-test-opt-upd-bcd51e9c-f846-4148-b4ce-c6d0e99566dd +STEP: Creating configMap with name cm-test-opt-create-0dcae1eb-6157-42cd-b43a-1422bed82c93 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:08.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6431" for this suite. 
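+(Reproduction sketch, illustrative names: a projected volume source marked optional lets the pod keep running while the referenced ConfigMap is deleted and recreated, which is the update behavior verified above — the kubelet refreshes the mounted keys in place.)
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-optional-demo
+spec:
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "sleep 3600"]
+    volumeMounts:
+    - {name: cfg, mountPath: /etc/cfg}
+  volumes:
+  - name: cfg
+    projected:
+      sources:
+      - configMap:
+          name: may-be-deleted   # optional: pod stays Running if this is absent
+          optional: true
+EOF
+```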
+•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":194,"skipped":3629,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:08.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7124 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Service +STEP: watching for the Service to be added +Oct 27 14:49:09.201: INFO: Found Service test-service-jtgfn in namespace services-7124 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Oct 27 14:49:09.201: INFO: Service test-service-jtgfn created +STEP: Getting /status +Oct 27 14:49:09.213: INFO: Service test-service-jtgfn has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Oct 27 14:49:09.235: INFO: observed Service test-service-jtgfn in namespace services-7124 with annotations: map[] & LoadBalancer: {[]} +Oct 27 14:49:09.235: INFO: Found Service test-service-jtgfn in namespace services-7124 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Oct 27 14:49:09.235: INFO: Service test-service-jtgfn has service status patched +STEP: updating the ServiceStatus +Oct 27 14:49:09.259: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Oct 27 14:49:09.269: INFO: Observed Service test-service-jtgfn in namespace services-7124 with annotations: map[] & Conditions: {[]} +Oct 27 14:49:09.270: INFO: Observed event: &Service{ObjectMeta:{test-service-jtgfn services-7124 5107c497-6bef-42e3-9605-5bbc752df128 25374 0 2021-10-27 14:49:09 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-27 14:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2021-10-27 14:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:100.67.138.46,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[100.67.138.46],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Oct 27 14:49:09.270: INFO: Found Service test-service-jtgfn in namespace services-7124 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 14:49:09.270: INFO: Service test-service-jtgfn has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Oct 27 14:49:09.294: INFO: observed Service test-service-jtgfn in namespace services-7124 with labels: map[test-service-static:true] +Oct 27 14:49:09.294: INFO: observed Service test-service-jtgfn in namespace services-7124 with labels: map[test-service-static:true] +Oct 27 14:49:09.294: INFO: observed Service test-service-jtgfn in namespace services-7124 with labels: map[test-service-static:true] +Oct 27 14:49:09.294: INFO: Found Service test-service-jtgfn in namespace services-7124 with labels: map[test-service:patched test-service-static:true] +Oct 27 14:49:09.294: INFO: Service test-service-jtgfn patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Oct 27 14:49:09.322: INFO: Observed event: ADDED +Oct 27 14:49:09.322: INFO: Observed event: MODIFIED +Oct 27 14:49:09.322: INFO: Observed event: MODIFIED +Oct 27 14:49:09.322: INFO: Observed event: MODIFIED +Oct 27 14:49:09.322: INFO: Found Service test-service-jtgfn in namespace services-7124 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Oct 27 14:49:09.322: INFO: Service test-service-jtgfn deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:09.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7124" for this suite. 
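+(Reproduction sketch, illustrative names; 203.0.113.1 is the same TEST-NET address the suite patches in above. The test writes to the Service status subresource directly, which plain kubectl patch did not target in this release, so a by-hand equivalent goes through the API server, e.g. via kubectl proxy:)
+```bash
+kubectl create service clusterip status-demo --tcp=80:80
+kubectl proxy --port=8001 &
+curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
+  -d '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}' \
+  http://127.0.0.1:8001/api/v1/namespaces/default/services/status-demo/status
+kill %1
+```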
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":195,"skipped":3705,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:09.348: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3207 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:49:09.537: INFO: Creating simple deployment test-new-deployment +Oct 27 14:49:09.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 14:49:11.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770942949, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: 
Patch a scale subresource +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 14:49:13.729: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-3207 f70e538d-f448-4464-9002-a6b9db49215a 25424 3 2021-10-27 14:49:09 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2021-10-27 14:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:49:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034b24e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-27 14:49:12 +0000 UTC,LastTransitionTime:2021-10-27 14:49:09 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 14:49:13 
+0000 UTC,LastTransitionTime:2021-10-27 14:49:13 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 14:49:13.742: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-3207 0a0bf223-8abc-43a5-86a6-d51f143b8fb2 25426 3 2021-10-27 14:49:09 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment f70e538d-f448-4464-9002-a6b9db49215a 0xc0034b28f7 0xc0034b28f8}] [] [{kube-controller-manager Update apps/v1 2021-10-27 14:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f70e538d-f448-4464-9002-a6b9db49215a\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 14:49:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034b2988 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 14:49:13.755: INFO: Pod "test-new-deployment-847dcfb7fb-79jkb" is available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-79jkb test-new-deployment-847dcfb7fb- deployment-3207 05354a5c-442c-45ed-9db4-5fb34c9db30b 25412 0 2021-10-27 14:49:09 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:892d59cf205bf864a03f47546979eaa85e0a03ec92694f8f869e778ed80caa04 cni.projectcalico.org/podIP:100.96.1.196/32 cni.projectcalico.org/podIPs:100.96.1.196/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 0a0bf223-8abc-43a5-86a6-d51f143b8fb2 0xc0060b8d57 0xc0060b8d58}] [] 
[{kube-controller-manager Update v1 2021-10-27 14:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a0bf223-8abc-43a5-86a6-d51f143b8fb2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 14:49:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 14:49:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q9v7c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9v7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.196,StartTime:2021-10-27 14:49:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 14:49:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://49fa4ca63d2b8659a4e4ad2a8d5c2b0a1fb70c2f0be11676cecb2d98e1678210,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:49:13.755: INFO: Pod "test-new-deployment-847dcfb7fb-c9ql8" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-c9ql8 test-new-deployment-847dcfb7fb- deployment-3207 37014497-d90f-4106-81ba-4113d1a6834e 25428 0 2021-10-27 14:49:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 0a0bf223-8abc-43a5-86a6-d51f143b8fb2 0xc0060b8fd0 0xc0060b8fd1}] [] [{kube-controller-manager Update v1 2021-10-27 14:49:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a0bf223-8abc-43a5-86a6-d51f143b8fb2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 14:49:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j6n87,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j6n87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 14:49:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:49:13.755: INFO: Pod "test-new-deployment-847dcfb7fb-sdlmv" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-sdlmv test-new-deployment-847dcfb7fb- deployment-3207 68b2405e-1957-4490-9a5a-2270d0bf8637 25433 0 2021-10-27 14:49:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 0a0bf223-8abc-43a5-86a6-d51f143b8fb2 0xc0060b9180 0xc0060b9181}] [] [{kube-controller-manager Update v1 2021-10-27 14:49:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a0bf223-8abc-43a5-86a6-d51f143b8fb2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b9vt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9vt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,O
perator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 14:49:13.755: INFO: Pod "test-new-deployment-847dcfb7fb-vxtlj" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-vxtlj test-new-deployment-847dcfb7fb- deployment-3207 ca69b171-83dd-47ea-839b-2856974e6e5f 25432 0 2021-10-27 14:49:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 0a0bf223-8abc-43a5-86a6-d51f143b8fb2 0xc0060b92b7 0xc0060b92b8}] [] [{kube-controller-manager Update v1 2021-10-27 14:49:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a0bf223-8abc-43a5-86a6-d51f143b8fb2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rn4gm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rn4gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:
[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 14:49:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:13.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-3207" for this suite. +•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":196,"skipped":3727,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:13.847: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-6145 +STEP: Waiting for a default service account to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:49:14.142: INFO: created pod +Oct 27 14:49:14.143: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6145" to be "Succeeded or Failed" +Oct 27 14:49:14.153: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.718946ms +Oct 27 14:49:16.165: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022555813s +STEP: Saw pod success +Oct 27 14:49:16.165: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Oct 27 14:49:46.165: INFO: polling logs +Oct 27 14:49:46.186: INFO: Pod logs: +2021/10/27 14:49:15 OK: Got token +2021/10/27 14:49:15 validating with in-cluster discovery +2021/10/27 14:49:15 OK: got issuer https://api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com +2021/10/27 14:49:15 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-6145:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635346754, NotBefore:1635346154, IssuedAt:1635346154, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6145", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"d0370584-4e01-4a3e-b6b9-47ab43adff5a"}}} +2021/10/27 14:49:15 OK: Constructed OIDC provider for issuer https://api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com +2021/10/27 14:49:15 OK: Validated signature on JWT +2021/10/27 14:49:15 OK: Got valid claims from token! 
+2021/10/27 14:49:15 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-6145:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635346754, NotBefore:1635346154, IssuedAt:1635346154, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6145", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"d0370584-4e01-4a3e-b6b9-47ab43adff5a"}}} + +Oct 27 14:49:46.187: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:46.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6145" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":197,"skipped":3748,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:46.231: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9632 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-9632 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-9632 +I1027 14:49:46.484031 5683 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9632, replica count: 2 +Oct 27 14:49:49.535: INFO: Creating new exec pod +I1027 14:49:49.535787 5683 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:49:54.605: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9632 exec execpodzglks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:49:54.985: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:49:54.985: INFO: stdout: "" +Oct 27 14:49:55.985: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9632 exec 
execpodzglks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 27 14:49:56.380: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 27 14:49:56.380: INFO: stdout: "externalname-service-zqhpc" +Oct 27 14:49:56.380: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9632 exec execpodzglks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.198.240 80' +Oct 27 14:49:56.720: INFO: stderr: "+ nc -v -t -w 2 100.67.198.240 80\nConnection to 100.67.198.240 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Oct 27 14:49:56.721: INFO: stdout: "externalname-service-mszrs" +Oct 27 14:49:56.721: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9632 exec execpodzglks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.2 30468' +Oct 27 14:49:57.236: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.2 30468\nConnection to 10.250.0.2 30468 port [tcp/*] succeeded!\n" +Oct 27 14:49:57.236: INFO: stdout: "externalname-service-zqhpc" +Oct 27 14:49:57.236: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9632 exec execpodzglks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.3 30468' +Oct 27 14:49:57.621: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.3 30468\nConnection to 10.250.0.3 30468 port [tcp/*] succeeded!\n" +Oct 27 14:49:57.621: INFO: stdout: "externalname-service-zqhpc" +Oct 27 14:49:57.621: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:49:57.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9632" for this suite. 
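+(Reproduction sketch, illustrative names; untested by-hand equivalent. Switching away from type=ExternalName requires clearing spec.externalName and supplying ports, and a selector if pods should back the service, roughly:)
+```bash
+kubectl create service externalname extname-demo --external-name=example.com
+kubectl patch service extname-demo --type=merge -p '{
+  "spec": {
+    "type": "NodePort",
+    "externalName": null,
+    "selector": {"app": "demo"},
+    "ports": [{"port": 80, "targetPort": 80}]
+  }
+}'
+kubectl get service extname-demo -o jsonpath='{.spec.type}/{.spec.ports[0].nodePort}'
+```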
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":198,"skipped":3754,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:49:57.680: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-8689 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-5626 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-5360 +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:04.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-8689" for this suite. +STEP: Destroying namespace "nsdeletetest-5626" for this suite. +Oct 27 14:50:04.365: INFO: Namespace nsdeletetest-5626 was already deleted +STEP: Destroying namespace "nsdeletetest-5360" for this suite. 
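+(Reproduction sketch, illustrative names — the same create/delete/recreate cycle the test drives above; the recreated namespace must come back empty of services:)
+```bash
+kubectl create namespace nsdelete-demo
+kubectl -n nsdelete-demo create service clusterip doomed-svc --tcp=80:80
+kubectl delete namespace nsdelete-demo --wait=true
+kubectl create namespace nsdelete-demo    # recreate under the same name
+kubectl -n nsdelete-demo get services     # should list nothing
+```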
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":199,"skipped":3791,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:04.378: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-8764 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 14:50:04.644: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:50:06.656: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:50:08.657: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 14:50:08.697: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:50:10.710: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:50:12.710: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 27 14:50:12.792: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 14:50:12.804: INFO: Pod pod-with-poststart-http-hook still exists +Oct 27 14:50:14.805: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 27 14:50:14.817: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:50:14.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-8764" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":200,"skipped":3824,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:50:14.851: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1796 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-9090c4f0-823d-405c-b850-ce0d590c6f52 in namespace container-probe-1796 +Oct 27 14:50:17.089: INFO: Started pod busybox-9090c4f0-823d-405c-b850-ce0d590c6f52 in namespace container-probe-1796 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 14:50:17.101: INFO: Initial restart count of pod busybox-9090c4f0-823d-405c-b850-ce0d590c6f52 is 0 +Oct 27 14:51:07.436: INFO: Restart count of pod container-probe-1796/busybox-9090c4f0-823d-405c-b850-ce0d590c6f52 is now 1 (50.334829711s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:07.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1796" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":201,"skipped":3855,"failed":0} + +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:07.486: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3122 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 27 14:51:07.697: INFO: Waiting up to 5m0s for pod "pod-105880e5-485b-4843-b520-1eeca41aa04f" in namespace "emptydir-3122" to be "Succeeded or Failed" +Oct 27 14:51:07.709: INFO: Pod "pod-105880e5-485b-4843-b520-1eeca41aa04f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.907988ms +Oct 27 14:51:09.721: INFO: Pod "pod-105880e5-485b-4843-b520-1eeca41aa04f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023732746s +STEP: Saw pod success +Oct 27 14:51:09.721: INFO: Pod "pod-105880e5-485b-4843-b520-1eeca41aa04f" satisfied condition "Succeeded or Failed" +Oct 27 14:51:09.732: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-105880e5-485b-4843-b520-1eeca41aa04f container test-container: +STEP: delete the pod +Oct 27 14:51:09.806: INFO: Waiting for pod pod-105880e5-485b-4843-b520-1eeca41aa04f to disappear +Oct 27 14:51:09.817: INFO: Pod pod-105880e5-485b-4843-b520-1eeca41aa04f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:09.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3122" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":202,"skipped":3855,"failed":0} +S +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:09.850: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6440 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:51:10.070: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-76a95f43-1731-4ff6-8e2e-852809012c54" in namespace "security-context-test-6440" to be "Succeeded or Failed" +Oct 27 14:51:10.081: INFO: Pod "busybox-privileged-false-76a95f43-1731-4ff6-8e2e-852809012c54": Phase="Pending", Reason="", readiness=false. Elapsed: 11.124056ms +Oct 27 14:51:12.094: INFO: Pod "busybox-privileged-false-76a95f43-1731-4ff6-8e2e-852809012c54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023894017s +Oct 27 14:51:12.094: INFO: Pod "busybox-privileged-false-76a95f43-1731-4ff6-8e2e-852809012c54" satisfied condition "Succeeded or Failed" +Oct 27 14:51:12.154: INFO: Got logs for pod "busybox-privileged-false-76a95f43-1731-4ff6-8e2e-852809012c54": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:51:12.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-6440" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":203,"skipped":3856,"failed":0} + +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:51:12.189: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6672 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-6672 +STEP: creating service affinity-clusterip-transition in namespace services-6672 +STEP: creating replication controller affinity-clusterip-transition in namespace services-6672 +I1027 14:51:12.413235 5683 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-6672, replica count: 3 +I1027 14:51:15.464533 5683 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 14:51:15.487: INFO: Creating new exec pod +Oct 27 14:51:18.531: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6672 exec execpod-affinitydjxr2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Oct 27 14:51:18.927: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Oct 27 14:51:18.927: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:51:18.927: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6672 exec execpod-affinitydjxr2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.148.38 80' +Oct 27 14:51:19.323: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.148.38 80\nConnection to 100.64.148.38 80 port [tcp/http] succeeded!\n" +Oct 27 14:51:19.323: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 14:51:19.349: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6672 exec 
execpod-affinitydjxr2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.148.38:80/ ; done' +Oct 27 14:51:19.755: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n" +Oct 27 14:51:19.755: INFO: stdout: "\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld" +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:19.755: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:49.756: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6672 exec execpod-affinitydjxr2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.148.38:80/ ; done' +Oct 27 14:51:50.165: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n" +Oct 27 14:51:50.165: INFO: stdout: "\naffinity-clusterip-transition-z7fnm\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-z7fnm\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-z7fnm" +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-z7fnm +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-z7fnm +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.165: INFO: Received response from host: affinity-clusterip-transition-z7fnm +Oct 27 14:51:50.193: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6672 exec execpod-affinitydjxr2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.148.38:80/ ; done' +Oct 27 14:51:50.613: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n" +Oct 27 14:51:50.613: INFO: stdout: "\naffinity-clusterip-transition-z7fnm\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-z7fnm\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-452jz\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-452jz" +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-z7fnm +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-z7fnm +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:51:50.613: INFO: Received response from host: affinity-clusterip-transition-452jz +Oct 27 14:52:20.614: INFO: 
Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6672 exec execpod-affinitydjxr2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.148.38:80/ ; done' +Oct 27 14:52:21.034: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.148.38:80/\n" +Oct 27 14:52:21.034: INFO: stdout: "\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld\naffinity-clusterip-transition-vmmld" +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Received response from host: 
affinity-clusterip-transition-vmmld +Oct 27 14:52:21.034: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6672, will wait for the garbage collector to delete the pods +Oct 27 14:52:21.129: INFO: Deleting ReplicationController affinity-clusterip-transition took: 13.452092ms +Oct 27 14:52:21.230: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.22732ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:52:24.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6672" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":204,"skipped":3856,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:52:24.190: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-2228 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 27 14:52:24.436: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 14:53:24.684: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 27 14:53:24.755: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 27 14:53:24.777: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 27 14:53:24.836: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 27 14:53:24.854: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:33.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-2228" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":205,"skipped":3904,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:33.195: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3654 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:53:33.420: INFO: The status of Pod busybox-readonly-fs33868bb4-3959-4a6d-856d-1f81dd98062b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:53:35.432: INFO: The status of Pod busybox-readonly-fs33868bb4-3959-4a6d-856d-1f81dd98062b is Pending, waiting for it to be Running (with Ready = true) +Oct 27 14:53:37.433: INFO: The status of Pod busybox-readonly-fs33868bb4-3959-4a6d-856d-1f81dd98062b is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:37.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3654" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":206,"skipped":3919,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:37.535: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7574 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 14:53:43.833: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:43.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1027 14:53:43.833020 5683 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-7574" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":207,"skipped":3922,"failed":0} + +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:43.859: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4659 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-4659/configmap-test-e90f336d-f58d-4cc4-b06e-3c7a78034e96 +STEP: Creating a pod to test consume configMaps +Oct 27 14:53:44.084: INFO: Waiting up to 5m0s for pod "pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25" in namespace "configmap-4659" to be "Succeeded or Failed" +Oct 27 14:53:44.096: INFO: Pod "pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25": Phase="Pending", Reason="", readiness=false. Elapsed: 12.034301ms +Oct 27 14:53:46.108: INFO: Pod "pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024408715s +Oct 27 14:53:48.122: INFO: Pod "pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037795282s +STEP: Saw pod success +Oct 27 14:53:48.122: INFO: Pod "pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25" satisfied condition "Succeeded or Failed" +Oct 27 14:53:48.133: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25 container env-test: +STEP: delete the pod +Oct 27 14:53:48.170: INFO: Waiting for pod pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25 to disappear +Oct 27 14:53:48.181: INFO: Pod pod-configmaps-e957d78c-ae23-479c-85a4-57191a32ac25 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:48.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4659" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":208,"skipped":3922,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:48.215: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1724 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:48.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1724" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":209,"skipped":3937,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:48.532: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1928 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 14:53:49.717: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 14:53:51.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943229, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943229, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943229, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943229, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 14:53:54.787: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:55.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1928" for this suite. +STEP: Destroying namespace "webhook-1928-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":210,"skipped":3944,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:55.248: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-1885 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override command +Oct 27 14:53:55.461: INFO: Waiting up to 5m0s for pod "client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8" in namespace "containers-1885" to be "Succeeded or Failed" +Oct 27 14:53:55.472: INFO: Pod "client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.275333ms +Oct 27 14:53:57.485: INFO: Pod "client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023539885s +Oct 27 14:53:59.498: INFO: Pod "client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037277213s +STEP: Saw pod success +Oct 27 14:53:59.498: INFO: Pod "client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8" satisfied condition "Succeeded or Failed" +Oct 27 14:53:59.510: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8 container agnhost-container: +STEP: delete the pod +Oct 27 14:53:59.548: INFO: Waiting for pod client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8 to disappear +Oct 27 14:53:59.559: INFO: Pod client-containers-e07a31da-d8c0-4239-b3de-85184f936ed8 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:53:59.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1885" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":211,"skipped":3981,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:53:59.593: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2616 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 14:53:59.806: INFO: Waiting up to 5m0s for pod "downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e" in namespace "downward-api-2616" to be "Succeeded or Failed" +Oct 27 14:53:59.817: INFO: Pod "downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.196953ms +Oct 27 14:54:01.829: INFO: Pod "downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023216353s +Oct 27 14:54:03.842: INFO: Pod "downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036185937s +STEP: Saw pod success +Oct 27 14:54:03.842: INFO: Pod "downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e" satisfied condition "Succeeded or Failed" +Oct 27 14:54:03.853: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e container dapi-container: +STEP: delete the pod +Oct 27 14:54:03.893: INFO: Waiting for pod downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e to disappear +Oct 27 14:54:03.904: INFO: Pod downward-api-9c305c75-99ce-4d1a-9954-03a0bab2aa2e no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:54:03.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2616" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":212,"skipped":3991,"failed":0} + +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:54:03.937: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3343 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3343.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3343.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 14:54:06.302: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.317: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.332: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.375: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.418: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.433: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.447: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.467: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:06.502: INFO: Lookups using dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local] + +Oct 27 14:54:11.518: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the 
requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.563: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.578: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.592: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.664: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.678: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.692: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.708: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:11.736: INFO: Lookups using dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local] + +Oct 27 14:54:16.520: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.563: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.577: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.591: INFO: Unable to read 
wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.665: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.679: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.694: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.709: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:16.738: INFO: Lookups using dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local] + +Oct 27 14:54:21.530: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.575: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.591: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.606: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.651: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.667: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server 
could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.681: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.695: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:21.725: INFO: Lookups using dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local] + +Oct 27 14:54:26.521: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.536: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.551: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.595: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.638: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.653: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.668: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.682: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:26.711: INFO: Lookups using 
dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local] + +Oct 27 14:54:31.533: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.547: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.562: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.628: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.671: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.686: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.701: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.717: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:31.746: INFO: Lookups using dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local] + +Oct 27 14:54:36.574: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod 
dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:36.593: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:36.663: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:36.677: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local from pod dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537: the server could not find the requested resource (get pods dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537) +Oct 27 14:54:36.705: INFO: Lookups using dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 failed for: [wheezy_udp@dns-test-service-2.dns-3343.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3343.svc.cluster.local jessie_udp@dns-test-service-2.dns-3343.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3343.svc.cluster.local] + +Oct 27 14:54:41.709: INFO: DNS probes using dns-3343/dns-test-23bf550d-2047-4bf7-b1de-f83f88fac537 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:54:41.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3343" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":213,"skipped":3991,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:54:41.832: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-9120 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 14:54:42.055: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6143ea0a-9dc1-4d8a-830f-38201ca96909" in namespace "security-context-test-9120" to be "Succeeded or Failed" +Oct 27 14:54:42.068: INFO: Pod "alpine-nnp-false-6143ea0a-9dc1-4d8a-830f-38201ca96909": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.341395ms +Oct 27 14:54:44.080: INFO: Pod "alpine-nnp-false-6143ea0a-9dc1-4d8a-830f-38201ca96909": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025004555s +Oct 27 14:54:46.092: INFO: Pod "alpine-nnp-false-6143ea0a-9dc1-4d8a-830f-38201ca96909": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037295559s +Oct 27 14:54:46.093: INFO: Pod "alpine-nnp-false-6143ea0a-9dc1-4d8a-830f-38201ca96909" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 14:54:46.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-9120" for this suite. +•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":214,"skipped":4007,"failed":0} +S +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 14:54:46.184: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-6831 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:00.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-6831" for this suite. 
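+
+Illustrative sketch (not recorded output): the ForbidConcurrent behavior exercised above can be reproduced with a minimal CronJob whose Job outlives its schedule interval; the name, image, and schedule below are arbitrary assumptions, not taken from this run.
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: forbid-demo            # hypothetical name
+spec:
+  schedule: "*/1 * * * *"
+  concurrencyPolicy: Forbid    # the policy under test
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: Never
+          containers:
+          - name: sleeper
+            image: busybox     # any long-sleeping image works
+            command: ["sleep", "300"]
+EOF
+# With Forbid, `kubectl get jobs` should never show a second active Job
+# while the first is still running.
+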
+ +• [SLOW TEST:314.327 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":215,"skipped":4008,"failed":0} +S +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:00.511: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6588 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in volume subpath +Oct 27 15:00:00.748: INFO: Waiting up to 5m0s for pod "var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54" in namespace "var-expansion-6588" to be "Succeeded or Failed" +Oct 27 15:00:00.761: INFO: Pod "var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54": Phase="Pending", Reason="", readiness=false. Elapsed: 12.985539ms +Oct 27 15:00:02.775: INFO: Pod "var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027209202s +Oct 27 15:00:04.789: INFO: Pod "var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041024812s +STEP: Saw pod success +Oct 27 15:00:04.789: INFO: Pod "var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54" satisfied condition "Succeeded or Failed" +Oct 27 15:00:04.800: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54 container dapi-container: +STEP: delete the pod +Oct 27 15:00:04.875: INFO: Waiting for pod var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54 to disappear +Oct 27 15:00:04.886: INFO: Pod var-expansion-237c5c8a-f51f-48ae-86a5-9372dfc25b54 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:04.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6588" for this suite. 
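+
+Illustrative sketch (not recorded output): the volume-subpath substitution verified above relies on the subPathExpr field; a minimal pod using it might look like this (pod name, container name, and image are arbitrary assumptions).
+
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-expansion-demo   # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox
+    command: ["sh", "-c", "test -d /volume_mount && echo ok"]
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    volumeMounts:
+    - name: workdir1
+      mountPath: /volume_mount
+      subPathExpr: $(POD_NAME)   # substituted from the env var above
+  volumes:
+  - name: workdir1
+    emptyDir: {}
+EOF
+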
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":216,"skipped":4009,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:04.920: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9628 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:00:05.117: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Oct 27 15:00:05.142: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 15:00:10.158: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 15:00:10.158: INFO: Creating deployment "test-rolling-update-deployment" +Oct 27 15:00:10.170: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Oct 27 15:00:10.193: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Oct 27 15:00:12.219: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Oct 27 15:00:12.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943610, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943610, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943610, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943610, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:00:14.244: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:00:14.279: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9628 3cb17720-d420-4154-8742-188ca19e3893 29322 1 2021-10-27 15:00:10 +0000 UTC map[name:sample-pod] 
map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-27 15:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:00:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e72d88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 15:00:10 +0000 UTC,LastTransitionTime:2021-10-27 15:00:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-27 15:00:12 +0000 UTC,LastTransitionTime:2021-10-27 15:00:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:00:14.291: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-9628 d9239615-46ea-4030-a708-4db1397c536b 29315 1 2021-10-27 15:00:10 +0000 UTC map[name:sample-pod 
pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 3cb17720-d420-4154-8742-188ca19e3893 0xc005e73277 0xc005e73278}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3cb17720-d420-4154-8742-188ca19e3893\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:00:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e73338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:00:14.291: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Oct 27 15:00:14.291: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9628 ec07486d-4100-4f54-a5e9-f8da0a1f9c82 29321 2 2021-10-27 15:00:05 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 3cb17720-d420-4154-8742-188ca19e3893 0xc005e73147 0xc005e73148}] [] [{e2e.test Update apps/v1 2021-10-27 15:00:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:00:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3cb17720-d420-4154-8742-188ca19e3893\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:00:12 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005e73208 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:00:14.303: INFO: Pod "test-rolling-update-deployment-585b757574-m89qk" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-m89qk test-rolling-update-deployment-585b757574- deployment-9628 4f26585e-db4a-49f0-8e26-45208d43d0f8 29314 0 2021-10-27 15:00:10 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[cni.projectcalico.org/containerID:6a933e95845e5d8964f42a56a81429f66f13885532657dcc685fea5713bbd2cc cni.projectcalico.org/podIP:100.96.1.225/32 cni.projectcalico.org/podIPs:100.96.1.225/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 d9239615-46ea-4030-a708-4db1397c536b 0xc005e73807 0xc005e73808}] [] [{kube-controller-manager Update v1 2021-10-27 15:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9239615-46ea-4030-a708-4db1397c536b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:00:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:00:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.225\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kpl6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kpl6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Ke
y:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:00:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:00:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:00:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.225,StartTime:2021-10-27 15:00:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:00:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://b150e4665d38624326db9e57a1be2ced02c445e975b86e5231d7dbfabd6e3ed6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:14.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9628" for this suite. 
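+
+Illustrative sketch (not recorded output): the same delete-old/create-new rollout can be triggered on any cluster with stock kubectl; the deployment name and images below are arbitrary assumptions.
+
+kubectl create deployment rolling-demo --image=nginx:1.20
+kubectl set image deployment/rolling-demo nginx=nginx:1.21
+kubectl rollout status deployment/rolling-demo
+# The old ReplicaSet is scaled to 0 and a new one is created, mirroring the
+# test-rolling-update-controller / test-rolling-update-deployment flow above.
+kubectl get replicasets -l app=rolling-demo
+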
+•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":217,"skipped":4030,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:14.339: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7241 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 27 15:00:14.549: INFO: Waiting up to 5m0s for pod "pod-45cb0ce3-f641-429e-bbb8-f31ec1aed71e" in namespace "emptydir-7241" to be "Succeeded or Failed" +Oct 27 15:00:14.560: INFO: Pod "pod-45cb0ce3-f641-429e-bbb8-f31ec1aed71e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.13105ms +Oct 27 15:00:16.573: INFO: Pod "pod-45cb0ce3-f641-429e-bbb8-f31ec1aed71e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023714678s +STEP: Saw pod success +Oct 27 15:00:16.573: INFO: Pod "pod-45cb0ce3-f641-429e-bbb8-f31ec1aed71e" satisfied condition "Succeeded or Failed" +Oct 27 15:00:16.584: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-45cb0ce3-f641-429e-bbb8-f31ec1aed71e container test-container: +STEP: delete the pod +Oct 27 15:00:16.619: INFO: Waiting for pod pod-45cb0ce3-f641-429e-bbb8-f31ec1aed71e to disappear +Oct 27 15:00:16.630: INFO: Pod pod-45cb0ce3-f641-429e-bbb8-f31ec1aed71e no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:16.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7241" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":218,"skipped":4083,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:16.664: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5376 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-c6ba142e-2e10-4c9f-a965-730149e225ad +STEP: Creating a pod to test consume secrets +Oct 27 15:00:16.887: INFO: Waiting up to 5m0s for pod "pod-secrets-a284a29b-54f7-4c1a-bbcd-e7ce08bef774" in namespace "secrets-5376" to be "Succeeded or Failed" +Oct 27 15:00:16.898: INFO: Pod "pod-secrets-a284a29b-54f7-4c1a-bbcd-e7ce08bef774": Phase="Pending", Reason="", readiness=false. Elapsed: 11.256601ms +Oct 27 15:00:18.911: INFO: Pod "pod-secrets-a284a29b-54f7-4c1a-bbcd-e7ce08bef774": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024481752s +STEP: Saw pod success +Oct 27 15:00:18.911: INFO: Pod "pod-secrets-a284a29b-54f7-4c1a-bbcd-e7ce08bef774" satisfied condition "Succeeded or Failed" +Oct 27 15:00:18.922: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-a284a29b-54f7-4c1a-bbcd-e7ce08bef774 container secret-volume-test: +STEP: delete the pod +Oct 27 15:00:19.000: INFO: Waiting for pod pod-secrets-a284a29b-54f7-4c1a-bbcd-e7ce08bef774 to disappear +Oct 27 15:00:19.011: INFO: Pod pod-secrets-a284a29b-54f7-4c1a-bbcd-e7ce08bef774 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:19.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5376" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":219,"skipped":4094,"failed":0} + +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:19.046: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8710 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:00:19.268: INFO: The status of Pod labelsupdate66e1da6c-588a-44ae-97ed-5bf17c9c349f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:00:21.281: INFO: The status of Pod labelsupdate66e1da6c-588a-44ae-97ed-5bf17c9c349f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:00:23.331: INFO: The status of Pod labelsupdate66e1da6c-588a-44ae-97ed-5bf17c9c349f is Running (Ready = true) +Oct 27 15:00:23.967: INFO: Successfully updated pod "labelsupdate66e1da6c-588a-44ae-97ed-5bf17c9c349f" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:26.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8710" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":220,"skipped":4094,"failed":0} +S +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:26.050: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8647 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:00:26.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6636a4f-aa75-4ce2-a1dd-e89ae83d4632" in namespace "downward-api-8647" to be "Succeeded or Failed" +Oct 27 15:00:26.346: INFO: Pod "downwardapi-volume-d6636a4f-aa75-4ce2-a1dd-e89ae83d4632": Phase="Pending", Reason="", readiness=false. Elapsed: 12.228161ms +Oct 27 15:00:28.359: INFO: Pod "downwardapi-volume-d6636a4f-aa75-4ce2-a1dd-e89ae83d4632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024986398s +STEP: Saw pod success +Oct 27 15:00:28.359: INFO: Pod "downwardapi-volume-d6636a4f-aa75-4ce2-a1dd-e89ae83d4632" satisfied condition "Succeeded or Failed" +Oct 27 15:00:28.371: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-d6636a4f-aa75-4ce2-a1dd-e89ae83d4632 container client-container: +STEP: delete the pod +Oct 27 15:00:28.408: INFO: Waiting for pod downwardapi-volume-d6636a4f-aa75-4ce2-a1dd-e89ae83d4632 to disappear +Oct 27 15:00:28.419: INFO: Pod downwardapi-volume-d6636a4f-aa75-4ce2-a1dd-e89ae83d4632 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:28.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8647" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":221,"skipped":4095,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:28.455: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3037 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 27 15:00:28.672: INFO: Waiting up to 5m0s for pod "pod-c8085dca-15f8-4fe3-83d3-c608715d502d" in namespace "emptydir-3037" to be "Succeeded or Failed" +Oct 27 15:00:28.683: INFO: Pod "pod-c8085dca-15f8-4fe3-83d3-c608715d502d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.532822ms +Oct 27 15:00:30.696: INFO: Pod "pod-c8085dca-15f8-4fe3-83d3-c608715d502d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023921983s +Oct 27 15:00:32.708: INFO: Pod "pod-c8085dca-15f8-4fe3-83d3-c608715d502d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036537174s +STEP: Saw pod success +Oct 27 15:00:32.709: INFO: Pod "pod-c8085dca-15f8-4fe3-83d3-c608715d502d" satisfied condition "Succeeded or Failed" +Oct 27 15:00:32.720: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-c8085dca-15f8-4fe3-83d3-c608715d502d container test-container: +STEP: delete the pod +Oct 27 15:00:32.796: INFO: Waiting for pod pod-c8085dca-15f8-4fe3-83d3-c608715d502d to disappear +Oct 27 15:00:32.807: INFO: Pod pod-c8085dca-15f8-4fe3-83d3-c608715d502d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:32.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3037" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":222,"skipped":4133,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:32.843: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-8947 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-downwardapi-xxl6 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:00:33.081: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xxl6" in namespace "subpath-8947" to be "Succeeded or Failed" +Oct 27 15:00:33.093: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.638368ms +Oct 27 15:00:35.105: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023580406s +Oct 27 15:00:37.117: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 4.035433696s +Oct 27 15:00:39.130: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 6.048723488s +Oct 27 15:00:41.143: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 8.062076124s +Oct 27 15:00:43.157: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 10.075697671s +Oct 27 15:00:45.170: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 12.089017332s +Oct 27 15:00:47.183: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 14.101523675s +Oct 27 15:00:49.196: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 16.11458862s +Oct 27 15:00:51.208: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 18.127258019s +Oct 27 15:00:53.221: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 20.139454586s +Oct 27 15:00:55.234: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Running", Reason="", readiness=true. Elapsed: 22.152932604s +Oct 27 15:00:57.246: INFO: Pod "pod-subpath-test-downwardapi-xxl6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.165098088s +STEP: Saw pod success +Oct 27 15:00:57.246: INFO: Pod "pod-subpath-test-downwardapi-xxl6" satisfied condition "Succeeded or Failed" +Oct 27 15:00:57.258: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-subpath-test-downwardapi-xxl6 container test-container-subpath-downwardapi-xxl6: +STEP: delete the pod +Oct 27 15:00:57.299: INFO: Waiting for pod pod-subpath-test-downwardapi-xxl6 to disappear +Oct 27 15:00:57.310: INFO: Pod pod-subpath-test-downwardapi-xxl6 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-xxl6 +Oct 27 15:00:57.310: INFO: Deleting pod "pod-subpath-test-downwardapi-xxl6" in namespace "subpath-8947" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:00:57.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-8947" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":223,"skipped":4139,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:00:57.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2504 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:01.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2504" for this suite. 
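+
+Illustrative sketch (not recorded output): this is the counterpart of the ForbidConcurrent case earlier in the log; with a concurrencyPolicy of Allow (the default), overlapping Jobs accumulate. The CronJob name reuses the hypothetical forbid-demo from the earlier sketch.
+
+kubectl patch cronjob forbid-demo --type=merge \
+  -p '{"spec":{"concurrencyPolicy":"Allow"}}'
+# After a couple of schedule intervals, more than one Job is active:
+kubectl get jobs
+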
+•{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":224,"skipped":4166,"failed":0} +SS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:01.639: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-5572 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-5572 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 27 15:02:01.836: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 27 15:02:01.979: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:02:03.991: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:02:05.992: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:07.992: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:09.994: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:11.990: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:13.991: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:15.992: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:17.991: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:19.993: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:22.003: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 27 15:02:23.992: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 27 15:02:24.017: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 27 15:02:26.085: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 27 15:02:26.085: INFO: Breadth first check of 100.96.0.94 on host 10.250.0.2... +Oct 27 15:02:26.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.235:9080/dial?request=hostname&protocol=udp&host=100.96.0.94&port=8081&tries=1'] Namespace:pod-network-test-5572 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:02:26.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:02:26.423: INFO: Waiting for responses: map[] +Oct 27 15:02:26.423: INFO: reached 100.96.0.94 after 0/1 tries +Oct 27 15:02:26.423: INFO: Breadth first check of 100.96.1.234 on host 10.250.0.3... 
+Oct 27 15:02:26.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.235:9080/dial?request=hostname&protocol=udp&host=100.96.1.234&port=8081&tries=1'] Namespace:pod-network-test-5572 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:02:26.435: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:02:26.732: INFO: Waiting for responses: map[] +Oct 27 15:02:26.732: INFO: reached 100.96.1.234 after 0/1 tries +Oct 27 15:02:26.732: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:26.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-5572" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":225,"skipped":4168,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:26.768: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5220 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:27.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5220" for this suite. 
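The immutable-Secret test that follows has no pod steps because the check is purely an API one: once `immutable: true` is set, the apiserver rejects any change to the secret's data. A small sketch of the behavior (secret name and key are made up):

```bash
# Hypothetical immutable Secret; any later write to its data should be
# rejected by the apiserver as long as `immutable` is set.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: immutable-demo
stringData:
  key: original
immutable: true
EOF
# Expected to fail with an immutability error:
kubectl patch secret immutable-demo -p '{"stringData":{"key":"changed"}}'
```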
+•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":226,"skipped":4209,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:27.093: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-812 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:02:27.283: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 15:02:31.971: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-812 --namespace=crd-publish-openapi-812 create -f -' +Oct 27 15:02:32.833: INFO: stderr: "" +Oct 27 15:02:32.833: INFO: stdout: "e2e-test-crd-publish-openapi-7463-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 15:02:32.833: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-812 --namespace=crd-publish-openapi-812 delete e2e-test-crd-publish-openapi-7463-crds test-cr' +Oct 27 15:02:32.960: INFO: stderr: "" +Oct 27 15:02:32.960: INFO: stdout: "e2e-test-crd-publish-openapi-7463-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Oct 27 15:02:32.960: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-812 --namespace=crd-publish-openapi-812 apply -f -' +Oct 27 15:02:33.186: INFO: stderr: "" +Oct 27 15:02:33.186: INFO: stdout: "e2e-test-crd-publish-openapi-7463-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 27 15:02:33.186: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-812 --namespace=crd-publish-openapi-812 delete e2e-test-crd-publish-openapi-7463-crds test-cr' +Oct 27 15:02:33.287: INFO: stderr: "" +Oct 27 15:02:33.287: INFO: stdout: "e2e-test-crd-publish-openapi-7463-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 27 15:02:33.287: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-812 explain e2e-test-crd-publish-openapi-7463-crds' +Oct 27 15:02:33.462: INFO: stderr: "" +Oct 27 15:02:33.462: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7463-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:40.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-812" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":227,"skipped":4226,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:40.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1596 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:02:40.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:02:42.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943760, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:02:45.872: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:02:45.884: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7514-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:02:48.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1596" for this suite. +STEP: Destroying namespace "webhook-1596-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":228,"skipped":4236,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:02:49.159: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-8710 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 27 15:02:49.472: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:03:10.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8710" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":229,"skipped":4277,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:03:10.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9639 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-3d1fa318-854f-4eea-b981-e9d34101a51f +STEP: Creating a pod to test consume secrets +Oct 27 15:03:10.784: INFO: Waiting up to 5m0s for pod "pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1" in namespace "secrets-9639" to be "Succeeded or Failed" +Oct 27 15:03:10.802: INFO: Pod "pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.983255ms +Oct 27 15:03:12.832: INFO: Pod "pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04845145s +Oct 27 15:03:14.845: INFO: Pod "pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061560992s +STEP: Saw pod success +Oct 27 15:03:14.845: INFO: Pod "pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1" satisfied condition "Succeeded or Failed" +Oct 27 15:03:14.857: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1 container secret-volume-test: +STEP: delete the pod +Oct 27 15:03:14.930: INFO: Waiting for pod pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1 to disappear +Oct 27 15:03:14.941: INFO: Pod pod-secrets-cca9b517-9db2-4a57-a7c4-e408e771bdc1 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:03:14.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9639" for this suite. 
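The secret-volume test above follows a common e2e pattern: create a secret, mount it into a short-lived pod whose container prints the key and exits, then assert the pod reaches `Succeeded` and check the logs. A rough equivalent (pod and volume names are invented; point `secretName` at any existing secret):

```bash
# Hypothetical reproduction of the secret-volume consumption pattern.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    secret:
      secretName: immutable-demo   # any existing secret
  containers:
  - name: reader
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/key"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
EOF
# Once the pod reaches Succeeded, the key's value appears in the logs:
kubectl logs secret-volume-demo
```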
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":230,"skipped":4283,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:03:14.975: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-3411 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Oct 27 15:03:15.230: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:03:17.243: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:03:19.245: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Oct 27 15:03:20.304: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:03:20.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3411" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":231,"skipped":4307,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:03:20.439: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4021 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-4021 +Oct 27 15:03:20.659: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:03:22.673: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 27 15:03:22.685: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4021 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 27 15:03:23.131: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 27 15:03:23.131: INFO: stdout: "iptables" +Oct 27 15:03:23.131: INFO: proxyMode: iptables +Oct 27 15:03:23.154: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 27 15:03:23.165: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-4021 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-4021 +I1027 15:03:23.196217 5683 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4021, replica count: 3 +I1027 15:03:26.248021 5683 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:03:26.270: INFO: Creating new exec pod +Oct 27 15:03:31.314: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4021 exec execpod-affinity54rv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Oct 27 15:03:31.687: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Oct 27 15:03:31.687: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad 
Request" +Oct 27 15:03:31.687: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4021 exec execpod-affinity54rv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.133.49 80' +Oct 27 15:03:32.023: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.133.49 80\nConnection to 100.67.133.49 80 port [tcp/http] succeeded!\n" +Oct 27 15:03:32.023: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:03:32.024: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4021 exec execpod-affinity54rv7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.67.133.49:80/ ; done' +Oct 27 15:03:32.472: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n" +Oct 27 15:03:32.473: INFO: stdout: "\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw\naffinity-clusterip-timeout-l9ppw" +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: 
Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Received response from host: affinity-clusterip-timeout-l9ppw +Oct 27 15:03:32.473: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4021 exec execpod-affinity54rv7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.67.133.49:80/' +Oct 27 15:03:32.857: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n" +Oct 27 15:03:32.857: INFO: stdout: "affinity-clusterip-timeout-l9ppw" +Oct 27 15:03:52.857: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4021 exec execpod-affinity54rv7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.67.133.49:80/' +Oct 27 15:03:53.292: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n" +Oct 27 15:03:53.292: INFO: stdout: "affinity-clusterip-timeout-l9ppw" +Oct 27 15:04:13.292: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-4021 exec execpod-affinity54rv7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.67.133.49:80/' +Oct 27 15:04:13.630: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.67.133.49:80/\n" +Oct 27 15:04:13.630: INFO: stdout: "affinity-clusterip-timeout-kdlhg" +Oct 27 15:04:13.630: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-4021, will wait for the garbage collector to delete the pods +Oct 27 15:04:13.724: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 13.228984ms +Oct 27 15:04:13.825: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.981646ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:15.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4021" for this suite. 
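The session-affinity behavior seen above (every request answered by `affinity-clusterip-timeout-l9ppw` until the client idles past the timeout, after which `affinity-clusterip-timeout-kdlhg` answers) comes from `sessionAffinity: ClientIP` combined with a short `timeoutSeconds`. A sketch of the Service shape being exercised (name, ports, and the exact timeout are illustrative; the suite picks its own values):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo
spec:
  selector:
    app: affinity-demo
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 20
EOF
```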
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":232,"skipped":4317,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:15.786: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6439 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 15:04:16.129: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6439 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 27 15:04:16.261: INFO: stderr: "" +Oct 27 15:04:16.262: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Oct 27 15:04:16.262: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6439 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' +Oct 27 15:04:16.535: INFO: stderr: "" +Oct 27 15:04:16.535: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 27 15:04:16.547: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6439 delete pods e2e-test-httpd-pod' +Oct 27 15:04:19.784: INFO: stderr: "" +Oct 27 15:04:19.784: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:19.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6439" for this suite. 
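The server-side dry-run test above works because `--dry-run=server` sends the patch through the full validation and admission chain but persists nothing, which is why the image check afterwards still sees httpd. A trimmed-down replay of the logged commands (the `--server`/`--kubeconfig` flags are omitted here for brevity):

```bash
kubectl run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
# Server-side dry-run: validated and admitted, but never persisted.
kubectl patch pod e2e-test-httpd-pod --dry-run=server \
  -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
# The live object is unchanged:
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
```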
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":233,"skipped":4358,"failed":0} +SSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:19.818: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename lease-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-7824 +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:20.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-7824" for this suite. +•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":234,"skipped":4361,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:20.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6337 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-6337 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-6337 +Oct 27 15:04:20.433: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:04:30.448: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Oct 27 15:04:30.498: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Oct 27 15:04:30.522: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Oct 27 15:04:30.533: INFO: Observed &StatefulSet event: ADDED +Oct 27 15:04:30.533: INFO: Found Statefulset ss in namespace statefulset-6337 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 15:04:30.533: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Oct 27 15:04:30.533: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 15:04:30.552: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Oct 27 15:04:30.563: INFO: Observed &StatefulSet event: ADDED +Oct 27 15:04:30.563: INFO: Observed Statefulset ss in namespace statefulset-6337 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 15:04:30.563: INFO: Observed &StatefulSet event: MODIFIED +Oct 27 15:04:30.563: INFO: Found Statefulset ss in namespace statefulset-6337 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:04:30.563: INFO: Deleting all statefulset in ns statefulset-6337 +Oct 27 15:04:30.574: INFO: Scaling statefulset ss to 0 +Oct 27 15:04:40.621: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:04:40.633: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:40.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6337" for this suite. 
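The status update and patch in the StatefulSet test above go through the `/status` subresource, for which kubectl at this release has no dedicated flag. One way to reproduce the logged patch is through `kubectl proxy` against the standard subresource path, assuming the namespace and name from the log (payload copied verbatim from the log above):

```bash
kubectl proxy --port=8001 &
curl -X PATCH \
  -H 'Content-Type: application/merge-patch+json' \
  -d '{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}' \
  http://127.0.0.1:8001/apis/apps/v1/namespaces/statefulset-6337/statefulsets/ss/status
```

Note that a JSON merge patch replaces the whole `conditions` array, which matches the observed result: the earlier `StatusUpdate` condition is gone and only `StatusPatched` remains.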
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":235,"skipped":4424,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:40.703: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7723 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:04:41.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:04:43.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770943881, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:04:46.271: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:46.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7723" for this suite. +STEP: Destroying namespace "webhook-7723-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":236,"skipped":4439,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:46.656: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4252 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's command +Oct 27 15:04:46.881: INFO: Waiting up to 5m0s for pod "var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093" in namespace "var-expansion-4252" to be "Succeeded or Failed" +Oct 27 15:04:46.892: INFO: Pod "var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093": Phase="Pending", Reason="", readiness=false. Elapsed: 10.874627ms +Oct 27 15:04:48.905: INFO: Pod "var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024031531s +Oct 27 15:04:50.919: INFO: Pod "var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038036323s +STEP: Saw pod success +Oct 27 15:04:50.919: INFO: Pod "var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093" satisfied condition "Succeeded or Failed" +Oct 27 15:04:50.930: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093 container dapi-container: +STEP: delete the pod +Oct 27 15:04:51.088: INFO: Waiting for pod var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093 to disappear +Oct 27 15:04:51.099: INFO: Pod var-expansion-1abb8f66-1b3a-48c4-a366-c2c4c7bd2093 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:51.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4252" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":237,"skipped":4447,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:51.135: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-496 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-bba0cbec-fe45-4676-95f3-19e285fabb0d +STEP: Creating a pod to test consume secrets +Oct 27 15:04:51.365: INFO: Waiting up to 5m0s for pod "pod-secrets-044f5c52-cb0f-4479-9949-71d6112441aa" in namespace "secrets-496" to be "Succeeded or Failed" +Oct 27 15:04:51.377: INFO: Pod "pod-secrets-044f5c52-cb0f-4479-9949-71d6112441aa": Phase="Pending", Reason="", readiness=false. Elapsed: 11.688665ms +Oct 27 15:04:53.390: INFO: Pod "pod-secrets-044f5c52-cb0f-4479-9949-71d6112441aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.024635497s +STEP: Saw pod success +Oct 27 15:04:53.390: INFO: Pod "pod-secrets-044f5c52-cb0f-4479-9949-71d6112441aa" satisfied condition "Succeeded or Failed" +Oct 27 15:04:53.402: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-044f5c52-cb0f-4479-9949-71d6112441aa container secret-volume-test: +STEP: delete the pod +Oct 27 15:04:53.441: INFO: Waiting for pod pod-secrets-044f5c52-cb0f-4479-9949-71d6112441aa to disappear +Oct 27 15:04:53.452: INFO: Pod pod-secrets-044f5c52-cb0f-4479-9949-71d6112441aa no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:53.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-496" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":238,"skipped":4479,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:53.486: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-4713 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:53.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-4713" for this suite. 
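The Kubelet test above only needs a pod that crash-loops forever and then confirms deletion still works. A minimal sketch (names invented; the default `restartPolicy: Always` keeps the container restarting):

```bash
# Hypothetical always-failing pod; deleting it must succeed even while
# the container is crash-looping.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: failing-demo
spec:
  containers:
  - name: fail
    image: busybox:1.29
    command: ["/bin/false"]
EOF
kubectl delete pod failing-demo
```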
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":239,"skipped":4491,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:53.732: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9974 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-482a94c5-c009-4048-8d65-01baa6ca50aa +STEP: Creating a pod to test consume secrets +Oct 27 15:04:53.961: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9" in namespace "projected-9974" to be "Succeeded or Failed" +Oct 27 15:04:53.972: INFO: Pod "pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.763313ms +Oct 27 15:04:55.984: INFO: Pod "pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02322161s +Oct 27 15:04:57.996: INFO: Pod "pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035600872s +STEP: Saw pod success +Oct 27 15:04:57.996: INFO: Pod "pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9" satisfied condition "Succeeded or Failed" +Oct 27 15:04:58.008: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 15:04:58.047: INFO: Waiting for pod pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9 to disappear +Oct 27 15:04:58.058: INFO: Pod pod-projected-secrets-de115081-817a-4645-b7be-ce05c04b45a9 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:04:58.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9974" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":240,"skipped":4504,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:04:58.091: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3882 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-bcb2503d-54d7-40e2-afed-952dd695f2c6 +STEP: Creating a pod to test consume configMaps +Oct 27 15:04:58.319: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910" in namespace "projected-3882" to be "Succeeded or Failed" +Oct 27 15:04:58.331: INFO: Pod "pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910": Phase="Pending", Reason="", readiness=false. Elapsed: 11.319761ms +Oct 27 15:05:00.343: INFO: Pod "pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023853733s +Oct 27 15:05:02.359: INFO: Pod "pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039347456s +STEP: Saw pod success +Oct 27 15:05:02.359: INFO: Pod "pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910" satisfied condition "Succeeded or Failed" +Oct 27 15:05:02.370: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910 container agnhost-container: +STEP: delete the pod +Oct 27 15:05:02.409: INFO: Waiting for pod pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910 to disappear +Oct 27 15:05:02.421: INFO: Pod pod-projected-configmaps-fac95635-6691-4773-8e48-b186a9b67910 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:02.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3882" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":241,"skipped":4507,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:02.454: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename tables +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-4699 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:02.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-4699" for this suite. +•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":242,"skipped":4520,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:02.700: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2524 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:16.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2524" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":346,"completed":243,"skipped":4525,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:16.124: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6720 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replication controller my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31 +Oct 27 15:05:16.336: INFO: Pod name my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31: Found 0 pods out of 1 +Oct 27 15:05:21.349: INFO: Pod name my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31: Found 1 pods out of 1 +Oct 27 15:05:21.349: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31" are running +Oct 27 15:05:21.360: INFO: Pod "my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31-tssr5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:05:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:05:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:05:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:05:16 
+0000 UTC Reason: Message:}]) +Oct 27 15:05:21.360: INFO: Trying to dial the pod +Oct 27 15:05:26.419: INFO: Controller my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31: Got expected result from replica 1 [my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31-tssr5]: "my-hostname-basic-797bc35a-2efc-4b9b-8493-8e09df01eb31-tssr5", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:26.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6720" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":244,"skipped":4558,"failed":0} +SSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:26.454: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-237 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:05:26.712: INFO: Create a RollingUpdate DaemonSet +Oct 27 15:05:26.724: INFO: Check that daemon pods launch on every node of the cluster +Oct 27 15:05:26.747: INFO: Number of nodes with available pods: 0 +Oct 27 15:05:26.747: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:05:27.781: INFO: Number of nodes with available pods: 0 +Oct 27 15:05:27.781: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:05:28.933: INFO: Number of nodes with available pods: 1 +Oct 27 15:05:28.933: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:05:29.781: INFO: Number of nodes with available pods: 2 +Oct 27 15:05:29.781: INFO: Number of running nodes: 2, number of available pods: 2 +Oct 27 15:05:29.781: INFO: Update the DaemonSet to trigger a rollout +Oct 27 15:05:29.806: INFO: Updating DaemonSet daemon-set +Oct 27 15:05:31.864: INFO: Roll back the DaemonSet before rollout is complete +Oct 27 15:05:31.890: INFO: Updating DaemonSet daemon-set +Oct 27 15:05:31.890: INFO: Make sure DaemonSet rollback is complete +Oct 27 15:05:36.927: INFO: Pod daemon-set-l64qx is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-237, will wait for the garbage collector to delete the pods +Oct 27 
15:05:37.047: INFO: Deleting DaemonSet.extensions daemon-set took: 13.171834ms +Oct 27 15:05:37.147: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.2054ms +Oct 27 15:05:39.260: INFO: Number of nodes with available pods: 0 +Oct 27 15:05:39.260: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 15:05:39.271: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"31783"},"items":null} + +Oct 27 15:05:39.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"31783"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:39.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-237" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":245,"skipped":4562,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:39.353: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-3273 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 15:05:39.577: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 15:05:39.599: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 15:05:39.652: INFO: waiting for watch events with expected annotations +Oct 27 15:05:39.652: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:39.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-3273" for this suite. 
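+
+The create/get/list/watch/patch/delete verbs exercised here map directly onto kubectl; a sketch against a scratch namespace, with illustrative names:
+
+kubectl create cronjob hello --image=busybox:1.36 --schedule="*/1 * * * *" -- echo hi
+kubectl get cronjob hello -o yaml             # get
+kubectl get cronjobs --all-namespaces         # cluster-wide list
+kubectl get cronjobs --watch                  # watch (blocks; Ctrl-C to stop)
+kubectl patch cronjob hello -p '{"metadata":{"annotations":{"patched":"true"}}}'
+kubectl delete cronjob hello
+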
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":246,"skipped":4575,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:39.794: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-9357 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:40.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-9357" for this suite. +•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":247,"skipped":4582,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:40.028: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7038 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. 
+Oct 27 15:05:40.260: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 27 15:05:45.272: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Oct 27 15:05:45.330: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Oct 27 15:05:45.355: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Oct 27 15:05:45.367: INFO: Observed &ReplicaSet event: ADDED +Oct 27 15:05:45.368: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 15:05:45.368: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 15:05:45.368: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 15:05:45.368: INFO: Found replicaset test-rs in namespace replicaset-7038 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 15:05:45.368: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Oct 27 15:05:45.368: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 27 15:05:45.387: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Oct 27 15:05:45.397: INFO: Observed &ReplicaSet event: ADDED +Oct 27 15:05:45.397: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 15:05:45.397: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 15:05:45.398: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 15:05:45.398: INFO: Observed replicaset test-rs in namespace replicaset-7038 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 27 15:05:45.398: INFO: Observed &ReplicaSet event: MODIFIED +Oct 27 15:05:45.398: INFO: Found replicaset test-rs in namespace replicaset-7038 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Oct 27 15:05:45.398: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:05:45.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-7038" for this suite. 
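+
+The /status subresource read and patched above can also be driven by hand; a sketch using the object names from this run (the --subresource flag assumes kubectl v1.24 or newer):
+
+kubectl get --raw /apis/apps/v1/namespaces/replicaset-7038/replicasets/test-rs/status
+kubectl patch rs test-rs -n replicaset-7038 --subresource=status --type=merge \
+  -p '{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}'
+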
+•{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":248,"skipped":4605,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:05:45.452: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-4424 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-4424 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-4424 +Oct 27 15:05:45.720: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:05:55.734: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:05:55.831: INFO: Deleting all statefulset in ns statefulset-4424 +Oct 27 15:05:55.842: INFO: Scaling statefulset ss to 0 +Oct 27 15:06:05.893: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:06:05.904: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:05.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4424" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":249,"skipped":4623,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:05.973: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-8040 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 15:06:06.249: INFO: Number of nodes with available pods: 0 +Oct 27 15:06:06.249: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:06:07.282: INFO: Number of nodes with available pods: 0 +Oct 27 15:06:07.282: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:06:08.282: INFO: Number of nodes with available pods: 1 +Oct 27 15:06:08.282: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:06:09.283: INFO: Number of nodes with available pods: 2 +Oct 27 15:06:09.283: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +Oct 27 15:06:09.363: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"32071"},"items":null} + +Oct 27 15:06:09.375: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"32071"},"items":[{"metadata":{"name":"daemon-set-kchxf","generateName":"daemon-set-","namespace":"daemonsets-8040","uid":"2bc316e5-b034-458d-a726-ac45e6f5046a","resourceVersion":"32070","creationTimestamp":"2021-10-27T15:06:06Z","deletionTimestamp":"2021-10-27T15:06:39Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"99808d99f8d71a6950f58eaf8128596c8e1b11d64819526fc44d5e1c5ad1f84e","cni.projectcalico.org/podIP":"100.96.0.98/32","cni.projectcalico.org/podIPs":"100.96.0.98/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"b2a0756c-873d-4a86-a684-556b51c9598e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T15:06:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2a0756c-873d-4a86-a684-556b51c9598e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T15:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T15:06:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-zwdf2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-zwdf2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/se
rviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T15:06:06Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T15:06:08Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T15:06:08Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T15:06:06Z"}],"hostIP":"10.250.0.2","podIP":"100.96.0.98","podIPs":[{"ip":"100.96.0.98"}],"startTime":"2021-10-27T15:06:06Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T15:06:07Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"docker://9f40e372417555b33376fac8810f6c569e42d154117d08266dcb8f387a5794df","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-l5kn9","generateName":"daemon-set-","namespace":"daemonsets-8040","uid":"5d2a7bc8-fa77-461a-bad8-09374ec03b40","resourceVersion":"32071","creationTimestamp":"2021-10-27T15:06:06Z","deletionTimestamp":"2021-10-27T15:06:39Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"a63f4438c2b2e8e2a44a849d2352eb029459720259579abf6aa32af36f5bce65","cni.projectcalico.org/podIP":"100.96.1.4/32","cni.projectcalico.org/podIPs":"100.96.1.4/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"b2a0756c-873d-4a86-a684-556b51c9598e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-27T15:06:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2a0756c-873d-4a86-a684-556b51c9598e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecut
ion":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-27T15:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-27T15:06:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-t7ncx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-t7ncx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T15:06:06Z"},{"type":"Ready","status":"True","lastProbeTi
me":null,"lastTransitionTime":"2021-10-27T15:06:08Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T15:06:08Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-27T15:06:06Z"}],"hostIP":"10.250.0.3","podIP":"100.96.1.4","podIPs":[{"ip":"100.96.1.4"}],"startTime":"2021-10-27T15:06:06Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-27T15:06:07Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"docker://8a507250296033b2ca59d5634e070918412f8d2a2bdfe036a619667ab37dc2b9","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:09.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8040" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":250,"skipped":4646,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:09.437: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-5796 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:26.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5796" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":346,"completed":251,"skipped":4669,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:26.118: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8201 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:06:26.330: INFO: Waiting up to 5m0s for pod "downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1" in namespace "downward-api-8201" to be "Succeeded or Failed" +Oct 27 15:06:26.341: INFO: Pod "downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.792559ms +Oct 27 15:06:28.353: INFO: Pod "downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023471093s +Oct 27 15:06:30.365: INFO: Pod "downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03580006s +STEP: Saw pod success +Oct 27 15:06:30.366: INFO: Pod "downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1" satisfied condition "Succeeded or Failed" +Oct 27 15:06:30.378: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1 container dapi-container: +STEP: delete the pod +Oct 27 15:06:30.421: INFO: Waiting for pod downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1 to disappear +Oct 27 15:06:30.432: INFO: Pod downward-api-67a87d74-faa9-408b-b5aa-c143a3404ea1 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:30.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8201" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":252,"skipped":4693,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:30.467: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6998 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:06:47.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6998" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":253,"skipped":4698,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:06:47.823: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-6052 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Oct 27 15:06:48.058: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32351 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:06:48.058: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32351 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Oct 27 15:06:58.084: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32403 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:06:58.084: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32403 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Oct 27 15:07:08.108: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32447 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:07:08.109: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32447 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Oct 27 15:07:18.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32490 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:07:18.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6052 86bce197-1b12-43e6-8945-c37abd8fb62d 32490 0 2021-10-27 15:06:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-27 15:06:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Oct 27 15:07:28.140: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6052 62f77ea7-4434-4912-a606-5acf879438f1 32533 0 2021-10-27 15:07:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:07:28.141: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6052 62f77ea7-4434-4912-a606-5acf879438f1 32533 0 2021-10-27 15:07:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Oct 27 15:07:38.156: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6052 62f77ea7-4434-4912-a606-5acf879438f1 32599 0 2021-10-27 15:07:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 27 15:07:38.156: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6052 62f77ea7-4434-4912-a606-5acf879438f1 32599 0 2021-10-27 15:07:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-27 15:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:07:48.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-6052" for this suite. 
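+
+A sketch of observing the same ADDED/MODIFIED/DELETED notification stream with a label-selector watch (label taken from this run; the command blocks until interrupted):
+
+kubectl get configmaps -l watch-this-configmap=multiple-watchers-A \
+  --watch --output-watch-events
+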
+•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":254,"skipped":4789,"failed":0} + +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:07:48.193: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5803 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5803.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5803.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:07:52.659: INFO: DNS probes using dns-5803/dns-test-6a2ac251-fcaf-416d-8fea-378b9ad0213f succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:07:52.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5803" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":255,"skipped":4789,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:07:52.703: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3681 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 27 15:08:03.415: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Oct 27 15:08:03.415: INFO: Deleting pod "simpletest-rc-to-be-deleted-2jx28" in namespace "gc-3681" +W1027 15:08:03.414975 5683 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 27 15:08:03.432: INFO: Deleting pod "simpletest-rc-to-be-deleted-7wdlh" in namespace "gc-3681" +Oct 27 15:08:03.447: INFO: Deleting pod "simpletest-rc-to-be-deleted-85jw4" in namespace "gc-3681" +Oct 27 15:08:03.465: INFO: Deleting pod "simpletest-rc-to-be-deleted-c6f5l" in namespace "gc-3681" +Oct 27 15:08:03.481: INFO: Deleting pod "simpletest-rc-to-be-deleted-kvkpl" in namespace "gc-3681" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:08:03.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3681" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":256,"skipped":4790,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:08:03.544: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8800 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-8800 +STEP: creating service affinity-nodeport-transition in namespace services-8800 +STEP: creating replication controller affinity-nodeport-transition in namespace services-8800 +I1027 15:08:03.768031 5683 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-8800, replica count: 3 +I1027 15:08:06.819278 5683 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:08:06.863: INFO: Creating new exec pod +Oct 27 15:08:11.927: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec execpod-affinitywb5kh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Oct 27 15:08:12.318: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\n+ echo hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Oct 27 15:08:12.318: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:08:12.318: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec execpod-affinitywb5kh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.224.163 80' +Oct 27 15:08:12.681: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 100.67.224.163 80\nConnection to 100.67.224.163 80 port [tcp/http] succeeded!\n" +Oct 27 15:08:12.681: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:08:12.681: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec 
execpod-affinitywb5kh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.2 32600' +Oct 27 15:08:13.064: INFO: stderr: "+ nc -v -t -w 2 10.250.0.2 32600\nConnection to 10.250.0.2 32600 port [tcp/*] succeeded!\n+ echo hostName\n" +Oct 27 15:08:13.064: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:08:13.064: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec execpod-affinitywb5kh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.3 32600' +Oct 27 15:08:13.444: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.0.3 32600\nConnection to 10.250.0.3 32600 port [tcp/*] succeeded!\n" +Oct 27 15:08:13.444: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:08:13.471: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec execpod-affinitywb5kh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.2:32600/ ; done' +Oct 27 15:08:13.900: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n" +Oct 27 15:08:13.901: INFO: stdout: "\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms" +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: 
affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:13.901: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:43.901: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec execpod-affinitywb5kh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.2:32600/ ; done' +Oct 27 15:08:44.388: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n" +Oct 27 15:08:44.388: INFO: stdout: "\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-88kms" +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.388: INFO: 
Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.388: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.416: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec execpod-affinitywb5kh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.2:32600/ ; done' +Oct 27 15:08:44.878: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n" +Oct 27 15:08:44.878: INFO: stdout: "\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-kbrg7\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-nrfdd\naffinity-nodeport-transition-nrfdd" +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-88kms 
+Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-kbrg7 +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:08:44.878: INFO: Received response from host: affinity-nodeport-transition-nrfdd +Oct 27 15:09:14.879: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8800 exec execpod-affinitywb5kh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.2:32600/ ; done' +Oct 27 15:09:15.428: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:32600/\n" +Oct 27 15:09:15.428: INFO: stdout: "\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms\naffinity-nodeport-transition-88kms" +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: 
affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Received response from host: affinity-nodeport-transition-88kms +Oct 27 15:09:15.428: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-8800, will wait for the garbage collector to delete the pods +Oct 27 15:09:15.524: INFO: Deleting ReplicationController affinity-nodeport-transition took: 12.867018ms +Oct 27 15:09:16.025: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.199888ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:18.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8800" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":257,"skipped":4815,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:18.983: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-7231 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:19.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7231" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":258,"skipped":4839,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:19.281: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-8238 +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:09:19.471: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:23.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-8238" for this suite. 
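+
+The CustomResourceDefinition listing test above only needs a definition to exist before listing. A sketch of the same flow with a throwaway CRD, assuming cluster-admin rights (the group and kind are invented for illustration):
+
+cat <<EOF | kubectl apply -f -
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: widgets.example.com
+spec:
+  group: example.com
+  scope: Namespaced
+  names:
+    plural: widgets
+    singular: widget
+    kind: Widget
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        x-kubernetes-preserve-unknown-fields: true
+EOF
+kubectl get customresourcedefinitions   # the listing the test exercises
+kubectl delete crd widgets.example.com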
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":259,"skipped":4844,"failed":0} +SSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:23.455: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-9462 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Oct 27 15:09:23.829: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:09:25.841: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:09:27.841: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Oct 27 15:09:27.885: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:09:29.898: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:09:31.898: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Oct 27 15:09:31.909: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:31.909: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:32.279: INFO: Exec stderr: "" +Oct 27 15:09:32.279: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:32.279: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:32.557: INFO: Exec stderr: "" +Oct 27 15:09:32.557: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:32.557: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:32.774: INFO: Exec stderr: "" +Oct 27 15:09:32.774: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:32.774: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 
15:09:33.024: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Oct 27 15:09:33.024: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:33.024: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:33.330: INFO: Exec stderr: "" +Oct 27 15:09:33.330: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:33.330: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:33.631: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Oct 27 15:09:33.631: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:33.631: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:33.926: INFO: Exec stderr: "" +Oct 27 15:09:33.926: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:33.926: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:34.201: INFO: Exec stderr: "" +Oct 27 15:09:34.201: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:34.201: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:34.534: INFO: Exec stderr: "" +Oct 27 15:09:34.534: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9462 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:09:34.534: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:09:34.749: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:34.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-9462" for this suite. 
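+
+The /etc/hosts checks above boil down to one observable: a kubelet-managed hosts file carries a telltale header comment, while a hostNetwork pod (or a container that mounts its own file over /etc/hosts) does not. A quick manual probe, assuming a busybox image is pullable (the pod name is illustrative):
+
+kubectl run etc-hosts-demo --image=busybox:1.34 --restart=Never -- sleep 3600
+kubectl wait --for=condition=Ready pod/etc-hosts-demo
+# Expect a "# Kubernetes-managed hosts file" header on the first line:
+kubectl exec etc-hosts-demo -- head -1 /etc/hosts
+# A pod with hostNetwork: true instead sees the node's own hosts file,
+# without that header, which is what the test verifies.
+kubectl delete pod etc-hosts-demo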
+•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":260,"skipped":4848,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:34.784: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3493 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 27 15:09:35.076: INFO: Number of nodes with available pods: 0 +Oct 27 15:09:35.076: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:09:36.108: INFO: Number of nodes with available pods: 0 +Oct 27 15:09:36.108: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:09:37.109: INFO: Number of nodes with available pods: 1 +Oct 27 15:09:37.109: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc is running more than one daemon pod +Oct 27 15:09:38.108: INFO: Number of nodes with available pods: 2 +Oct 27 15:09:38.108: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Getting /status +Oct 27 15:09:38.132: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Oct 27 15:09:38.156: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Oct 27 15:09:38.166: INFO: Observed &DaemonSet event: ADDED +Oct 27 15:09:38.167: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:09:38.167: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:09:38.167: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:09:38.167: INFO: Found daemon set daemon-set in namespace daemonsets-3493 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 15:09:38.167: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Oct 27 15:09:38.190: INFO: Observed &DaemonSet event: ADDED +Oct 27 15:09:38.191: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:09:38.191: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:09:38.191: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:09:38.191: INFO: Observed daemon 
set daemon-set in namespace daemonsets-3493 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 27 15:09:38.191: INFO: Observed &DaemonSet event: MODIFIED +Oct 27 15:09:38.191: INFO: Found daemon set daemon-set in namespace daemonsets-3493 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Oct 27 15:09:38.191: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3493, will wait for the garbage collector to delete the pods +Oct 27 15:09:38.278: INFO: Deleting DaemonSet.extensions daemon-set took: 13.114396ms +Oct 27 15:09:38.379: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.831951ms +Oct 27 15:09:40.791: INFO: Number of nodes with available pods: 0 +Oct 27 15:09:40.791: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 15:09:40.803: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"33539"},"items":null} + +Oct 27 15:09:40.814: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"33539"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:40.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3493" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":261,"skipped":4857,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:40.883: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9950 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:41.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9950" for this suite. 
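+
+The secret test above is a plain patch-label-and-select round trip. The same sequence by hand, assuming any namespace you can write to (names and label values are illustrative):
+
+kubectl create secret generic patch-demo --from-literal=key=value
+# Patch a label onto the secret, as the test does via the PATCH verb:
+kubectl patch secret patch-demo -p '{"metadata":{"labels":{"purpose":"patch-demo"}}}'
+# List across namespaces by the patched label, then delete by selector:
+kubectl get secrets --all-namespaces -l purpose=patch-demo
+kubectl delete secrets -l purpose=patch-demo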
+•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":262,"skipped":4886,"failed":0} + +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:41.178: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-989 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:09:41.398: INFO: The status of Pod labelsupdate1b89ad03-ebaf-4494-b226-c8c9a386cf56 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:09:43.412: INFO: The status of Pod labelsupdate1b89ad03-ebaf-4494-b226-c8c9a386cf56 is Running (Ready = true) +Oct 27 15:09:43.971: INFO: Successfully updated pod "labelsupdate1b89ad03-ebaf-4494-b226-c8c9a386cf56" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:46.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-989" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":263,"skipped":4886,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:46.065: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-5547 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:48.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-5547" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":264,"skipped":4902,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:48.492: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2942 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-b3b91a6b-2ce4-4f44-85d9-8e8ac164c44f +STEP: Creating a pod to test consume secrets +Oct 27 15:09:48.719: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49" in namespace "projected-2942" to be "Succeeded or Failed" +Oct 27 15:09:48.730: INFO: Pod "pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49": Phase="Pending", Reason="", readiness=false. Elapsed: 11.451676ms +Oct 27 15:09:50.743: INFO: Pod "pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024172914s +Oct 27 15:09:52.756: INFO: Pod "pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03689866s +STEP: Saw pod success +Oct 27 15:09:52.756: INFO: Pod "pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49" satisfied condition "Succeeded or Failed" +Oct 27 15:09:52.767: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49 container projected-secret-volume-test: +STEP: delete the pod +Oct 27 15:09:52.824: INFO: Waiting for pod pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49 to disappear +Oct 27 15:09:52.835: INFO: Pod pod-projected-secrets-cc9a5a70-c7ef-44d4-b3f8-3a8d73235f49 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:52.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2942" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":265,"skipped":4920,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:52.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4457 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:09:53.643: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Oct 27 15:09:55.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944193, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944193, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944193, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770944193, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:09:58.711: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:09:59.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4457" for this suite. +STEP: Destroying namespace "webhook-4457-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":266,"skipped":4933,"failed":0} +SSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:09:59.351: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-8008 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:09:59.592: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Oct 27 15:09:59.615: INFO: Number of nodes with available pods: 0 +Oct 27 15:09:59.615: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Oct 27 15:09:59.672: INFO: Number of nodes with available pods: 0 +Oct 27 15:09:59.672: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:00.684: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:00.684: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:01.684: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:01.684: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:02.686: INFO: Number of nodes with available pods: 1 +Oct 27 15:10:02.686: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Oct 27 15:10:02.747: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:02.747: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Oct 27 15:10:02.773: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:02.773: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:03.785: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:03.785: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:04.831: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:04.831: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:05.786: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:05.786: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:06.785: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:06.785: INFO: Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 is running more than one daemon pod +Oct 27 15:10:07.786: INFO: Number of nodes with available pods: 1 +Oct 27 15:10:07.786: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8008, will wait for the garbage collector to delete the pods +Oct 27 15:10:07.883: INFO: Deleting DaemonSet.extensions daemon-set took: 13.17368ms +Oct 27 15:10:07.984: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.655575ms +Oct 27 15:10:10.295: INFO: Number of nodes with available pods: 0 +Oct 27 15:10:10.296: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 27 15:10:10.306: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"33896"},"items":null} + +Oct 27 15:10:10.320: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"33896"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:10:10.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8008" for this suite. 
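+
+The complex-daemon test above is driven entirely by node labels: the DaemonSet's nodeSelector decides where pods run, and relabelling nodes schedules or evicts them. A sketch of the same control loop, assuming a node you are free to relabel (<node> is a placeholder):
+
+cat <<EOF | kubectl apply -f -
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: selector-demo
+spec:
+  selector:
+    matchLabels:
+      app: selector-demo
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        app: selector-demo
+    spec:
+      nodeSelector:
+        color: blue
+      containers:
+      - name: main
+        image: busybox:1.34
+        command: ["sleep", "3600"]
+EOF
+kubectl get pods -l app=selector-demo -o wide        # empty: no node labelled yet
+kubectl label node <node> color=blue                 # a daemon pod appears there
+kubectl label node <node> color=green --overwrite    # and is unscheduled again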
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":267,"skipped":4938,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:10:10.414: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2776 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod test-webserver-6cfee0f2-2849-4bb9-8286-9b00e4afb2c5 in namespace container-probe-2776 +Oct 27 15:10:12.657: INFO: Started pod test-webserver-6cfee0f2-2849-4bb9-8286-9b00e4afb2c5 in namespace container-probe-2776 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:10:12.669: INFO: Initial restart count of pod test-webserver-6cfee0f2-2849-4bb9-8286-9b00e4afb2c5 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:14.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2776" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":268,"skipped":4945,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:14.434: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9222 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9222" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":269,"skipped":4956,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:14.666: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename prestop +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-8302 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating server pod server in namespace prestop-8302 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-8302 +STEP: Deleting pre-stop pod +Oct 27 15:14:26.054: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
+ ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:26.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-8302" for this suite. +•{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":270,"skipped":4976,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:26.109: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5058 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 27 15:14:26.329: INFO: Waiting up to 5m0s for pod "pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15" in namespace "emptydir-5058" to be "Succeeded or Failed" +Oct 27 15:14:26.341: INFO: Pod "pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15": Phase="Pending", Reason="", readiness=false. Elapsed: 11.483952ms +Oct 27 15:14:28.354: INFO: Pod "pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024827875s +Oct 27 15:14:30.367: INFO: Pod "pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037949008s +STEP: Saw pod success +Oct 27 15:14:30.367: INFO: Pod "pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15" satisfied condition "Succeeded or Failed" +Oct 27 15:14:30.379: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15 container test-container: +STEP: delete the pod +Oct 27 15:14:30.422: INFO: Waiting for pod pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15 to disappear +Oct 27 15:14:30.433: INFO: Pod pod-61fa6d6b-e3b2-411b-b0ef-44e231c42c15 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:14:30.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5058" for this suite. 
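+
+The emptyDir test above has a non-root container create a 0644 file on the default (node-disk) medium and read it back. A sketch of the same pod shape, assuming busybox (UID 1000 and the paths are illustrative; emptyDir directories are created world-writable, which is what lets a non-root user write):
+
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000
+  containers:
+  - name: main
+    image: busybox:1.34
+    command: ["sh", "-c", "echo hello > /mnt/f && chmod 0644 /mnt/f && ls -ln /mnt/f"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /mnt
+  volumes:
+  - name: scratch
+    emptyDir: {}
+EOF
+kubectl logs emptydir-demo   # expect -rw-r--r-- owned by uid 1000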
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":271,"skipped":4997,"failed":0} +SS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:14:30.467: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2857 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Oct 27 15:14:34.721: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2857 PodName:var-expansion-9d549fa6-5669-477a-84c1-dd8fb6c1379d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:34.721: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: test for file in mounted path +Oct 27 15:14:34.981: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2857 PodName:var-expansion-9d549fa6-5669-477a-84c1-dd8fb6c1379d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:14:34.981: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: updating the annotation value +Oct 27 15:14:35.834: INFO: Successfully updated pod "var-expansion-9d549fa6-5669-477a-84c1-dd8fb6c1379d" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Oct 27 15:14:35.847: INFO: Deleting pod "var-expansion-9d549fa6-5669-477a-84c1-dd8fb6c1379d" in namespace "var-expansion-2857" +Oct 27 15:14:35.860: INFO: Wait up to 5m0s for pod "var-expansion-9d549fa6-5669-477a-84c1-dd8fb6c1379d" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:07.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-2857" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":272,"skipped":4999,"failed":0} + +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:07.918: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingress +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingress-2941 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 27 15:15:08.217: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 27 15:15:08.238: INFO: starting watch +STEP: patching +STEP: updating +Oct 27 15:15:08.288: INFO: waiting for watch events with expected annotations +Oct 27 15:15:08.288: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:08.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-2941" for this suite. 
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":273,"skipped":4999,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:08.425: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5844 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:15:08.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b" in namespace "projected-5844" to be "Succeeded or Failed" +Oct 27 15:15:08.648: INFO: Pod "downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.893274ms +Oct 27 15:15:10.661: INFO: Pod "downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025206037s +Oct 27 15:15:12.674: INFO: Pod "downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038755158s +STEP: Saw pod success +Oct 27 15:15:12.675: INFO: Pod "downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b" satisfied condition "Succeeded or Failed" +Oct 27 15:15:12.686: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b container client-container: +STEP: delete the pod +Oct 27 15:15:12.764: INFO: Waiting for pod downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b to disappear +Oct 27 15:15:12.775: INFO: Pod downwardapi-volume-0ac9bc44-094b-4c44-9c34-0abd736e8b8b no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:12.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5844" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":274,"skipped":5009,"failed":0} +SSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:12.809: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9647 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-9647 +STEP: creating service affinity-nodeport in namespace services-9647 +STEP: creating replication controller affinity-nodeport in namespace services-9647 +I1027 15:15:13.052176 5683 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9647, replica count: 3 +I1027 15:15:16.103790 5683 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:15:16.174: INFO: Creating new exec pod +Oct 27 15:15:19.239: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9647 exec execpod-affinityqdkl7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Oct 27 15:15:19.860: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Oct 27 15:15:19.860: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:15:19.861: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9647 exec execpod-affinityqdkl7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.78.20 80' +Oct 27 15:15:20.252: INFO: stderr: "+ nc -v -t -w 2 100.65.78.20 80\n+ echo hostName\nConnection to 100.65.78.20 80 port [tcp/http] succeeded!\n" +Oct 27 15:15:20.252: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:15:20.252: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9647 exec execpod-affinityqdkl7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.2 31194' +Oct 27 15:15:20.579: INFO: stderr: "+ nc 
-v -t -w 2 10.250.0.2 31194\n+ echo hostName\nConnection to 10.250.0.2 31194 port [tcp/*] succeeded!\n" +Oct 27 15:15:20.580: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:15:20.580: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9647 exec execpod-affinityqdkl7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.0.3 31194' +Oct 27 15:15:20.930: INFO: stderr: "+ nc -v -t -w 2 10.250.0.3 31194\n+ echo hostName\nConnection to 10.250.0.3 31194 port [tcp/*] succeeded!\n" +Oct 27 15:15:20.930: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 27 15:15:20.930: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9647 exec execpod-affinityqdkl7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.0.2:31194/ ; done' +Oct 27 15:15:21.406: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.0.2:31194/\n" +Oct 27 15:15:21.406: INFO: stdout: "\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz\naffinity-nodeport-twlcz" +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 
15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Received response from host: affinity-nodeport-twlcz +Oct 27 15:15:21.406: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-9647, will wait for the garbage collector to delete the pods +Oct 27 15:15:21.499: INFO: Deleting ReplicationController affinity-nodeport took: 14.020361ms +Oct 27 15:15:21.599: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.198288ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:24.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9647" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":275,"skipped":5012,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:24.566: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1374 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-5cae3e2f-8af4-443e-bffc-be413121ebfa +STEP: Creating the pod +Oct 27 15:15:24.823: INFO: The status of Pod pod-configmaps-55bf8de0-6a36-40da-8381-5d3202164f5a is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:15:26.836: INFO: The status of Pod pod-configmaps-55bf8de0-6a36-40da-8381-5d3202164f5a is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-5cae3e2f-8af4-443e-bffc-be413121ebfa +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:28.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1374" for this suite. 
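Aside (annotation, not part of the recorded run): the ConfigMap update test above relies on the kubelet's periodic sync of configMap volumes — an edit to the ConfigMap becomes visible inside a running pod without a restart, typically within about a minute. A hand-run sketch with hypothetical names:

```bash
kubectl create configmap live-update --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher                  # hypothetical
spec:
  containers:
  - name: watcher
    image: busybox:1.34
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: live-update
EOF
# update the value in place and watch the pod's view change
kubectl create configmap live-update --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl logs -f pod/cm-watcher
```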
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":276,"skipped":5025,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:29.010: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4417 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-eb897952-8295-4550-b046-996699732b4f +STEP: Creating a pod to test consume configMaps +Oct 27 15:15:29.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf4ee480-7769-49e9-b446-c2997ebd6d99" in namespace "configmap-4417" to be "Succeeded or Failed" +Oct 27 15:15:29.246: INFO: Pod "pod-configmaps-cf4ee480-7769-49e9-b446-c2997ebd6d99": Phase="Pending", Reason="", readiness=false. Elapsed: 11.487849ms +Oct 27 15:15:31.259: INFO: Pod "pod-configmaps-cf4ee480-7769-49e9-b446-c2997ebd6d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02409385s +STEP: Saw pod success +Oct 27 15:15:31.259: INFO: Pod "pod-configmaps-cf4ee480-7769-49e9-b446-c2997ebd6d99" satisfied condition "Succeeded or Failed" +Oct 27 15:15:31.271: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-cf4ee480-7769-49e9-b446-c2997ebd6d99 container agnhost-container: +STEP: delete the pod +Oct 27 15:15:31.309: INFO: Waiting for pod pod-configmaps-cf4ee480-7769-49e9-b446-c2997ebd6d99 to disappear +Oct 27 15:15:31.320: INFO: Pod pod-configmaps-cf4ee480-7769-49e9-b446-c2997ebd6d99 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:31.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4417" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":277,"skipped":5041,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:31.354: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1097 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:15:31.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432" in namespace "projected-1097" to be "Succeeded or Failed" +Oct 27 15:15:31.582: INFO: Pod "downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432": Phase="Pending", Reason="", readiness=false. Elapsed: 13.331518ms +Oct 27 15:15:33.595: INFO: Pod "downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026276906s +Oct 27 15:15:35.608: INFO: Pod "downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039642106s +STEP: Saw pod success +Oct 27 15:15:35.608: INFO: Pod "downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432" satisfied condition "Succeeded or Failed" +Oct 27 15:15:35.619: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432 container client-container: +STEP: delete the pod +Oct 27 15:15:35.656: INFO: Waiting for pod downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432 to disappear +Oct 27 15:15:35.666: INFO: Pod downwardapi-volume-0163e364-612a-40dc-adf1-d8b706081432 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:15:35.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1097" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":278,"skipped":5056,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:15:35.699: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-3205 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:20:36.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-3205" for this suite. + +• [SLOW TEST:300.557 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":279,"skipped":5103,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:20:36.256: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-5666 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:20:36.450: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:20:36.474: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 15:20:36.487: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 before test +Oct 27 15:20:36.512: INFO: addons-nginx-ingress-controller-d5756fc97-k8kst from kube-system started at 2021-10-27 14:37:29 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.512: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:20:36.512: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-vv84b from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.512: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:20:36.512: INFO: apiserver-proxy-sl296 from kube-system started at 2021-10-27 13:56:02 +0000 UTC (2 container statuses recorded) +Oct 27 15:20:36.512: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:20:36.512: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:20:36.512: INFO: calico-node-4h2tf from kube-system started at 2021-10-27 13:58:05 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.512: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:20:36.512: INFO: calico-node-vertical-autoscaler-785b5f968-9qxv8 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.512: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:20:36.512: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-s7nwv from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.512: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:20:36.512: INFO: calico-typha-vertical-autoscaler-5c9655cddd-qxmpq from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: coredns-6944b5cf58-cqcmx from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: coredns-6944b5cf58-qwp9p from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: csi-driver-node-l4n7m from kube-system started at 2021-10-27 13:56:02 +0000 UTC (3 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: kube-proxy-4k6j5 from kube-system started at 2021-10-27 14:45:36 +0000 UTC (2 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: metrics-server-6b8fdcd747-t4xbj from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: node-exporter-cwjxv from kube-system started at 2021-10-27 13:56:02 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: node-problem-detector-g5rmr 
from kube-system started at 2021-10-27 14:24:37 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: vpn-shoot-77b49d5987-8ddn6 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: dashboard-metrics-scraper-7ccbfc448f-l8nhq from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:20:36.513: INFO: kubernetes-dashboard-7888b55b49-xptfd from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.513: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 15:20:36.513: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc before test +Oct 27 15:20:36.534: INFO: apiserver-proxy-z9z6b from kube-system started at 2021-10-27 13:56:05 +0000 UTC (2 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: blackbox-exporter-65c549b94c-rjgf7 from kube-system started at 2021-10-27 14:03:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: calico-kube-controllers-56bcbfb5c5-f9t75 from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: calico-node-7gp7f from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: calico-typha-deploy-546b97d4b5-z8pql from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: csi-driver-node-4sm4p from kube-system started at 2021-10-27 13:56:05 +0000 UTC (3 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: kube-proxy-g7ktr from kube-system started at 2021-10-27 14:45:36 +0000 UTC (2 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: node-exporter-zsjq5 from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:20:36.534: INFO: node-problem-detector-9pkv8 from kube-system started at 2021-10-27 14:24:37 +0000 UTC (1 container statuses recorded) +Oct 27 15:20:36.534: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-07302cb0-712b-4803-bbe8-67b4cc2cc085 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.250.0.3 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-07302cb0-712b-4803-bbe8-67b4cc2cc085 off the node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +STEP: verifying the node doesn't have the label kubernetes.io/e2e-07302cb0-712b-4803-bbe8-67b4cc2cc085 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:44.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5666" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:308.585 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":280,"skipped":5105,"failed":0} +S +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:44.841: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9237 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:25:45.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8542f2cb-6d81-460c-8e55-be02de4bd149" in namespace "downward-api-9237" to be "Succeeded or Failed" +Oct 27 15:25:45.091: INFO: Pod 
"downwardapi-volume-8542f2cb-6d81-460c-8e55-be02de4bd149": Phase="Pending", Reason="", readiness=false. Elapsed: 19.924231ms +Oct 27 15:25:47.102: INFO: Pod "downwardapi-volume-8542f2cb-6d81-460c-8e55-be02de4bd149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031303813s +STEP: Saw pod success +Oct 27 15:25:47.103: INFO: Pod "downwardapi-volume-8542f2cb-6d81-460c-8e55-be02de4bd149" satisfied condition "Succeeded or Failed" +Oct 27 15:25:47.114: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-8542f2cb-6d81-460c-8e55-be02de4bd149 container client-container: +STEP: delete the pod +Oct 27 15:25:47.168: INFO: Waiting for pod downwardapi-volume-8542f2cb-6d81-460c-8e55-be02de4bd149 to disappear +Oct 27 15:25:47.179: INFO: Pod downwardapi-volume-8542f2cb-6d81-460c-8e55-be02de4bd149 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:47.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9237" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":281,"skipped":5106,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:47.216: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-4206 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:25:47.405: INFO: Creating ReplicaSet my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c +Oct 27 15:25:47.430: INFO: Pod name my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c: Found 0 pods out of 1 +Oct 27 15:25:52.443: INFO: Pod name my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c: Found 1 pods out of 1 +Oct 27 15:25:52.443: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c" is running +Oct 27 15:25:52.456: INFO: Pod "my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c-qjld9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:25:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:25:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:25:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-27 15:25:47 +0000 UTC Reason: Message:}]) +Oct 27 15:25:52.456: INFO: Trying to dial the pod +Oct 27 15:25:57.546: 
INFO: Controller my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c: Got expected result from replica 1 [my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c-qjld9]: "my-hostname-basic-74a00e09-79cd-4b8f-9283-7050e9b03e2c-qjld9", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:57.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-4206" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":282,"skipped":5127,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:57.580: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-7520 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:25:57.829: INFO: The status of Pod pod-secrets-bdcb9e04-e60b-429f-b5ae-b44002db7001 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:25:59.841: INFO: The status of Pod pod-secrets-bdcb9e04-e60b-429f-b5ae-b44002db7001 is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:25:59.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-7520" for this suite. 
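Aside (annotation, not part of the recorded run): the wrapper test above mounts a secret volume and a configMap volume side by side in one pod and checks that the emptyDir wrappers backing them do not collide. Sketch with hypothetical names:

```bash
kubectl create secret generic wrapped-secret --from-literal=k=v
kubectl create configmap wrapped-cm --from-literal=k=v
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-no-conflict         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox:1.34
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && echo no-conflict"]
    volumeMounts:
    - name: s
      mountPath: /etc/secret-volume
    - name: c
      mountPath: /etc/configmap-volume
  volumes:
  - name: s
    secret:
      secretName: wrapped-secret
  - name: c
    configMap:
      name: wrapped-cm
EOF
```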
+•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":283,"skipped":5128,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:25:59.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8261 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8261.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8261.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8261.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8261.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8261.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8261.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:26:02.444: INFO: DNS probes using dns-8261/dns-test-3ee578ad-6744-4ca7-8dbb-58ffb7976c55 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:02.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8261" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":284,"skipped":5186,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:02.609: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename hostport +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostport-7312 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Oct 27 15:26:02.931: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:04.943: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:06.942: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.250.0.3 on the node which pod1 resides and expect scheduled +Oct 27 15:26:06.972: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:09.030: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.250.0.3 but use UDP protocol on the node which pod2 resides +Oct 27 15:26:09.535: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:11.549: INFO: The status of Pod pod3 is Running (Ready = true) +Oct 27 15:26:11.577: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:26:13.589: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Oct 27 15:26:13.601: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.250.0.3 http://127.0.0.1:54323/hostname] Namespace:hostport-7312 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:26:13.601: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.0.3, port: 54323 +Oct 27 15:26:13.947: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.250.0.3:54323/hostname] Namespace:hostport-7312 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:26:13.947: INFO: >>> kubeConfig: 
/tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.0.3, port: 54323 UDP +Oct 27 15:26:14.303: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.250.0.3 54323] Namespace:hostport-7312 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 27 15:26:14.303: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:19.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-7312" for this suite. +•{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":285,"skipped":5211,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:19.638: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-6898 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:24.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-6898" for this suite. 
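Aside (annotation, not part of the recorded run): the watch-ordering guarantee can be eyeballed by running two concurrent watches and generating events; both watchers must see identical sequences. A rough sketch, assuming your kubectl version supports the --output-watch-events flag on `kubectl get -w`:

```bash
# two concurrent watches on the same resource, outputs captured separately
kubectl get configmaps -w --output-watch-events > watch-a.log 2>/dev/null & A=$!
kubectl get configmaps -w --output-watch-events > watch-b.log 2>/dev/null & B=$!
sleep 2
# produce a deterministic stream of ADDED/DELETED events
for i in 1 2 3; do
  kubectl create configmap "order-test-$i"
  kubectl delete configmap "order-test-$i"
done
sleep 5
kill "$A" "$B"
diff watch-a.log watch-b.log && echo "both watchers saw the same event order"
```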
+•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":286,"skipped":5224,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:24.829: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename discovery +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-9077 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:26:26.150: INFO: Checking APIGroup: apiregistration.k8s.io +Oct 27 15:26:26.160: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Oct 27 15:26:26.160: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Oct 27 15:26:26.160: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Oct 27 15:26:26.160: INFO: Checking APIGroup: apps +Oct 27 15:26:26.170: INFO: PreferredVersion.GroupVersion: apps/v1 +Oct 27 15:26:26.170: INFO: Versions found [{apps/v1 v1}] +Oct 27 15:26:26.170: INFO: apps/v1 matches apps/v1 +Oct 27 15:26:26.170: INFO: Checking APIGroup: events.k8s.io +Oct 27 15:26:26.180: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Oct 27 15:26:26.180: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Oct 27 15:26:26.180: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Oct 27 15:26:26.180: INFO: Checking APIGroup: authentication.k8s.io +Oct 27 15:26:26.189: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Oct 27 15:26:26.189: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Oct 27 15:26:26.190: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Oct 27 15:26:26.190: INFO: Checking APIGroup: authorization.k8s.io +Oct 27 15:26:26.199: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Oct 27 15:26:26.199: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Oct 27 15:26:26.199: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Oct 27 15:26:26.200: INFO: Checking APIGroup: autoscaling +Oct 27 15:26:26.210: INFO: PreferredVersion.GroupVersion: autoscaling/v1 +Oct 27 15:26:26.210: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Oct 27 15:26:26.210: INFO: autoscaling/v1 matches autoscaling/v1 +Oct 27 15:26:26.210: INFO: Checking APIGroup: batch +Oct 27 15:26:26.219: INFO: PreferredVersion.GroupVersion: batch/v1 +Oct 27 15:26:26.219: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Oct 27 15:26:26.219: INFO: batch/v1 matches batch/v1 +Oct 27 15:26:26.219: INFO: Checking APIGroup: certificates.k8s.io +Oct 27 15:26:26.229: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Oct 27 
15:26:26.229: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Oct 27 15:26:26.229: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Oct 27 15:26:26.229: INFO: Checking APIGroup: networking.k8s.io +Oct 27 15:26:26.239: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Oct 27 15:26:26.240: INFO: Versions found [{networking.k8s.io/v1 v1}] +Oct 27 15:26:26.240: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Oct 27 15:26:26.240: INFO: Checking APIGroup: policy +Oct 27 15:26:26.249: INFO: PreferredVersion.GroupVersion: policy/v1 +Oct 27 15:26:26.249: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Oct 27 15:26:26.249: INFO: policy/v1 matches policy/v1 +Oct 27 15:26:26.249: INFO: Checking APIGroup: rbac.authorization.k8s.io +Oct 27 15:26:26.259: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Oct 27 15:26:26.259: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Oct 27 15:26:26.259: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Oct 27 15:26:26.259: INFO: Checking APIGroup: storage.k8s.io +Oct 27 15:26:26.269: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Oct 27 15:26:26.269: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Oct 27 15:26:26.269: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Oct 27 15:26:26.269: INFO: Checking APIGroup: admissionregistration.k8s.io +Oct 27 15:26:26.279: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Oct 27 15:26:26.279: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Oct 27 15:26:26.279: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Oct 27 15:26:26.279: INFO: Checking APIGroup: apiextensions.k8s.io +Oct 27 15:26:26.289: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Oct 27 15:26:26.289: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Oct 27 15:26:26.289: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Oct 27 15:26:26.289: INFO: Checking APIGroup: scheduling.k8s.io +Oct 27 15:26:26.298: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Oct 27 15:26:26.299: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Oct 27 15:26:26.299: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Oct 27 15:26:26.299: INFO: Checking APIGroup: coordination.k8s.io +Oct 27 15:26:26.309: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Oct 27 15:26:26.309: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Oct 27 15:26:26.309: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Oct 27 15:26:26.309: INFO: Checking APIGroup: node.k8s.io +Oct 27 15:26:26.318: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Oct 27 15:26:26.318: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Oct 27 15:26:26.318: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Oct 27 15:26:26.318: INFO: Checking APIGroup: discovery.k8s.io +Oct 27 15:26:26.328: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Oct 27 15:26:26.328: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Oct 27 15:26:26.328: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Oct 27 15:26:26.328: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Oct 27 15:26:26.339: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 15:26:26.339: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Oct 27 15:26:26.339: INFO: flowcontrol.apiserver.k8s.io/v1beta1 
matches flowcontrol.apiserver.k8s.io/v1beta1 +Oct 27 15:26:26.339: INFO: Checking APIGroup: autoscaling.k8s.io +Oct 27 15:26:26.349: INFO: PreferredVersion.GroupVersion: autoscaling.k8s.io/v1 +Oct 27 15:26:26.349: INFO: Versions found [{autoscaling.k8s.io/v1 v1} {autoscaling.k8s.io/v1beta2 v1beta2}] +Oct 27 15:26:26.349: INFO: autoscaling.k8s.io/v1 matches autoscaling.k8s.io/v1 +Oct 27 15:26:26.349: INFO: Checking APIGroup: crd.projectcalico.org +Oct 27 15:26:26.358: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Oct 27 15:26:26.358: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Oct 27 15:26:26.358: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Oct 27 15:26:26.358: INFO: Checking APIGroup: cert.gardener.cloud +Oct 27 15:26:26.368: INFO: PreferredVersion.GroupVersion: cert.gardener.cloud/v1alpha1 +Oct 27 15:26:26.368: INFO: Versions found [{cert.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 15:26:26.368: INFO: cert.gardener.cloud/v1alpha1 matches cert.gardener.cloud/v1alpha1 +Oct 27 15:26:26.368: INFO: Checking APIGroup: dns.gardener.cloud +Oct 27 15:26:26.378: INFO: PreferredVersion.GroupVersion: dns.gardener.cloud/v1alpha1 +Oct 27 15:26:26.378: INFO: Versions found [{dns.gardener.cloud/v1alpha1 v1alpha1}] +Oct 27 15:26:26.378: INFO: dns.gardener.cloud/v1alpha1 matches dns.gardener.cloud/v1alpha1 +Oct 27 15:26:26.378: INFO: Checking APIGroup: snapshot.storage.k8s.io +Oct 27 15:26:26.388: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 +Oct 27 15:26:26.388: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] +Oct 27 15:26:26.388: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 +Oct 27 15:26:26.388: INFO: Checking APIGroup: metrics.k8s.io +Oct 27 15:26:26.398: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Oct 27 15:26:26.398: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Oct 27 15:26:26.398: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:26.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-9077" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":287,"skipped":5245,"failed":0} +SSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:26.432: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5379 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-9a34ca9e-8f2f-4ade-82da-1b4ecc03f6e7 +STEP: Creating a pod to test consume configMaps +Oct 27 15:26:26.657: INFO: Waiting up to 5m0s for pod "pod-configmaps-b01dfae4-79f8-4f80-8d4c-f61f7e2953b2" in namespace "configmap-5379" to be "Succeeded or Failed" +Oct 27 15:26:26.668: INFO: Pod "pod-configmaps-b01dfae4-79f8-4f80-8d4c-f61f7e2953b2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.023413ms +Oct 27 15:26:28.680: INFO: Pod "pod-configmaps-b01dfae4-79f8-4f80-8d4c-f61f7e2953b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023049899s +STEP: Saw pod success +Oct 27 15:26:28.680: INFO: Pod "pod-configmaps-b01dfae4-79f8-4f80-8d4c-f61f7e2953b2" satisfied condition "Succeeded or Failed" +Oct 27 15:26:28.691: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-b01dfae4-79f8-4f80-8d4c-f61f7e2953b2 container agnhost-container: +STEP: delete the pod +Oct 27 15:26:28.728: INFO: Waiting for pod pod-configmaps-b01dfae4-79f8-4f80-8d4c-f61f7e2953b2 to disappear +Oct 27 15:26:28.739: INFO: Pod pod-configmaps-b01dfae4-79f8-4f80-8d4c-f61f7e2953b2 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:28.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5379" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":288,"skipped":5248,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:28.773: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-232 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 15:26:29.049: INFO: running pods: 0 < 3 +Oct 27 15:26:31.067: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:33.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-232" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":289,"skipped":5324,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:33.109: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5214 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:26:33.335: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Oct 27 15:26:38.347: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 15:26:38.347: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Oct 27 15:26:40.360: INFO: Creating deployment "test-rollover-deployment" +Oct 27 15:26:40.384: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Oct 27 15:26:42.410: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Oct 27 15:26:42.436: INFO: Ensure that both replica sets have 1 created replica +Oct 27 15:26:42.460: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Oct 27 15:26:42.487: INFO: Updating deployment test-rollover-deployment +Oct 27 15:26:42.487: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Oct 27 15:26:44.511: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Oct 27 15:26:44.536: INFO: Make sure deployment "test-rollover-deployment" is complete +Oct 27 15:26:44.560: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:26:44.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945202, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:26:46.585: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:26:46.586: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945204, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:26:48.585: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:26:48.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945204, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:26:50.585: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:26:50.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945204, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:26:52.585: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:26:52.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, 
loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945204, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:26:54.586: INFO: all replica sets need to contain the pod-template-hash label +Oct 27 15:26:54.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945204, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945200, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:26:56.586: INFO: +Oct 27 15:26:56.586: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:26:56.621: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-5214 af718c60-58d6-4909-b929-765e8d3ec8b5 39309 2 2021-10-27 15:26:40 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 15:26:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:26:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c6ce28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 15:26:40 +0000 UTC,LastTransitionTime:2021-10-27 15:26:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-27 15:26:54 +0000 UTC,LastTransitionTime:2021-10-27 15:26:40 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:26:56.633: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-5214 d1a7d1b1-e064-43f6-a571-0d52ecd9ea15 39302 2 2021-10-27 15:26:42 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment af718c60-58d6-4909-b929-765e8d3ec8b5 0xc001c6d7c0 0xc001c6d7c1}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:26:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af718c60-58d6-4909-b929-765e8d3ec8b5\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:26:54 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c6d858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:26:56.633: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Oct 27 15:26:56.633: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5214 f3c8af2e-7d76-4c3f-9c9b-f6d5f16bcb77 39308 2 2021-10-27 15:26:33 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment af718c60-58d6-4909-b929-765e8d3ec8b5 0xc001c6d3a7 0xc001c6d3a8}] [] [{e2e.test Update apps/v1 2021-10-27 15:26:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:26:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af718c60-58d6-4909-b929-765e8d3ec8b5\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:26:54 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001c6d4e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:26:56.633: INFO: 
&ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-5214 4d6e68c7-19ef-4837-8f46-8eb2b6b8d13a 39245 2 2021-10-27 15:26:40 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment af718c60-58d6-4909-b929-765e8d3ec8b5 0xc001c6d577 0xc001c6d578}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:26:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af718c60-58d6-4909-b929-765e8d3ec8b5\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:26:42 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c6d758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:26:56.645: INFO: Pod "test-rollover-deployment-98c5f4599-cxxxq" is available: +&Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-cxxxq test-rollover-deployment-98c5f4599- deployment-5214 f6cdf6dd-2c8a-4455-b19f-f9c500ff8a73 39257 0 2021-10-27 15:26:42 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[cni.projectcalico.org/containerID:f5239f76c40626f56c219b07531c759462bd22013194247e54559b9c349924a5 cni.projectcalico.org/podIP:100.96.1.48/32 cni.projectcalico.org/podIPs:100.96.1.48/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 d1a7d1b1-e064-43f6-a571-0d52ecd9ea15 0xc001c6dda0 0xc001c6dda1}] [] [{kube-controller-manager Update v1 2021-10-27 15:26:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1a7d1b1-e064-43f6-a571-0d52ecd9ea15\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:26:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:26:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h5spg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5spg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSo
urce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:26:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:26:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:26:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:26:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.48,StartTime:2021-10-27 15:26:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:26:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://473409c6cdbe4c42128284512158bafa15b679e327586a3d9023a6a1c0b68b82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:26:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-5214" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":290,"skipped":5334,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:26:56.680: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8225 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Oct 27 15:26:56.897: INFO: Pod name pod-release: Found 0 pods out of 1 +Oct 27 15:27:01.909: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:01.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8225" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":291,"skipped":5344,"failed":0} +SSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:02.054: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-645 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating secret secrets-645/secret-test-544ffe68-cf63-464a-8d51-075aab071186 +STEP: Creating a pod to test consume secrets +Oct 27 15:27:02.512: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c0b66fb-1c25-4034-a87a-4be368815faf" in namespace "secrets-645" to be "Succeeded or Failed" +Oct 27 15:27:02.528: INFO: Pod "pod-configmaps-2c0b66fb-1c25-4034-a87a-4be368815faf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.393687ms +Oct 27 15:27:04.608: INFO: Pod "pod-configmaps-2c0b66fb-1c25-4034-a87a-4be368815faf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.091130145s +STEP: Saw pod success +Oct 27 15:27:04.609: INFO: Pod "pod-configmaps-2c0b66fb-1c25-4034-a87a-4be368815faf" satisfied condition "Succeeded or Failed" +Oct 27 15:27:04.621: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-2c0b66fb-1c25-4034-a87a-4be368815faf container env-test: +STEP: delete the pod +Oct 27 15:27:04.660: INFO: Waiting for pod pod-configmaps-2c0b66fb-1c25-4034-a87a-4be368815faf to disappear +Oct 27 15:27:04.672: INFO: Pod pod-configmaps-2c0b66fb-1c25-4034-a87a-4be368815faf no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:04.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-645" for this suite. +•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":292,"skipped":5349,"failed":0} +SS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:04.790: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9701 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9701.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9701.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9701.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9701.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9701.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9701.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 27 15:27:09.297: INFO: DNS probes using dns-9701/dns-test-9abd27c3-f0bc-4979-a5ad-69a2cb517abf succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:09.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9701" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":293,"skipped":5351,"failed":0} +SSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:09.366: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-7118 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-secret-5mz7 +STEP: Creating a pod to test atomic-volume-subpath +Oct 27 15:27:09.609: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5mz7" in namespace "subpath-7118" to be "Succeeded or Failed" +Oct 27 15:27:09.620: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.170879ms +Oct 27 15:27:11.632: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023381077s +Oct 27 15:27:13.645: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 4.035820841s +Oct 27 15:27:15.658: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 6.049344147s +Oct 27 15:27:17.670: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 8.06132968s +Oct 27 15:27:19.683: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 10.074348375s +Oct 27 15:27:21.696: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 12.08758363s +Oct 27 15:27:23.709: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.100032689s +Oct 27 15:27:25.721: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 16.112454731s +Oct 27 15:27:27.734: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 18.125093719s +Oct 27 15:27:29.747: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Running", Reason="", readiness=true. Elapsed: 20.138157193s +Oct 27 15:27:31.760: INFO: Pod "pod-subpath-test-secret-5mz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.151271865s +STEP: Saw pod success +Oct 27 15:27:31.760: INFO: Pod "pod-subpath-test-secret-5mz7" satisfied condition "Succeeded or Failed" +Oct 27 15:27:31.772: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-subpath-test-secret-5mz7 container test-container-subpath-secret-5mz7: +STEP: delete the pod +Oct 27 15:27:31.809: INFO: Waiting for pod pod-subpath-test-secret-5mz7 to disappear +Oct 27 15:27:31.821: INFO: Pod pod-subpath-test-secret-5mz7 no longer exists +STEP: Deleting pod pod-subpath-test-secret-5mz7 +Oct 27 15:27:31.821: INFO: Deleting pod "pod-subpath-test-secret-5mz7" in namespace "subpath-7118" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:31.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-7118" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":294,"skipped":5355,"failed":0} + +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:31.866: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-236 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 27 15:27:32.125: INFO: Waiting up to 5m0s for pod "security-context-e96e072b-458c-4441-88ee-7b9f3166925e" in namespace "security-context-236" to be "Succeeded or Failed" +Oct 27 15:27:32.137: INFO: Pod "security-context-e96e072b-458c-4441-88ee-7b9f3166925e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.445792ms +Oct 27 15:27:34.149: INFO: Pod "security-context-e96e072b-458c-4441-88ee-7b9f3166925e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.023868483s +STEP: Saw pod success +Oct 27 15:27:34.149: INFO: Pod "security-context-e96e072b-458c-4441-88ee-7b9f3166925e" satisfied condition "Succeeded or Failed" +Oct 27 15:27:34.162: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod security-context-e96e072b-458c-4441-88ee-7b9f3166925e container test-container: +STEP: delete the pod +Oct 27 15:27:34.198: INFO: Waiting for pod security-context-e96e072b-458c-4441-88ee-7b9f3166925e to disappear +Oct 27 15:27:34.209: INFO: Pod security-context-e96e072b-458c-4441-88ee-7b9f3166925e no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:34.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-236" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":295,"skipped":5355,"failed":0} + +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:34.244: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8171 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 27 15:27:34.434: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8171 create -f -' +Oct 27 15:27:35.117: INFO: stderr: "" +Oct 27 15:27:35.117: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 27 15:27:36.130: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:27:36.130: INFO: Found 0 / 1 +Oct 27 15:27:37.131: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:27:37.131: INFO: Found 0 / 1 +Oct 27 15:27:38.130: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:27:38.131: INFO: Found 1 / 1 +Oct 27 15:27:38.131: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Oct 27 15:27:38.142: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:27:38.142: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Oct 27 15:27:38.142: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8171 patch pod agnhost-primary-b8h65 -p {"metadata":{"annotations":{"x":"y"}}}' +Oct 27 15:27:38.299: INFO: stderr: "" +Oct 27 15:27:38.299: INFO: stdout: "pod/agnhost-primary-b8h65 patched\n" +STEP: checking annotations +Oct 27 15:27:38.312: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 27 15:27:38.312: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:38.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8171" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":296,"skipped":5355,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:38.346: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6765 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-4950 +STEP: Creating secret with name secret-test-f7226a64-4044-40d1-bc63-891985c72ad5 +STEP: Creating a pod to test consume secrets +Oct 27 15:27:38.799: INFO: Waiting up to 5m0s for pod "pod-secrets-b284e3da-4d52-480b-90b7-904c876d91c5" in namespace "secrets-6765" to be "Succeeded or Failed" +Oct 27 15:27:38.810: INFO: Pod "pod-secrets-b284e3da-4d52-480b-90b7-904c876d91c5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.669694ms +Oct 27 15:27:40.824: INFO: Pod "pod-secrets-b284e3da-4d52-480b-90b7-904c876d91c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.025090008s +STEP: Saw pod success +Oct 27 15:27:40.824: INFO: Pod "pod-secrets-b284e3da-4d52-480b-90b7-904c876d91c5" satisfied condition "Succeeded or Failed" +Oct 27 15:27:40.835: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-secrets-b284e3da-4d52-480b-90b7-904c876d91c5 container secret-volume-test: +STEP: delete the pod +Oct 27 15:27:40.913: INFO: Waiting for pod pod-secrets-b284e3da-4d52-480b-90b7-904c876d91c5 to disappear +Oct 27 15:27:40.924: INFO: Pod pod-secrets-b284e3da-4d52-480b-90b7-904c876d91c5 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:40.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6765" for this suite. +STEP: Destroying namespace "secret-namespace-4950" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":297,"skipped":5376,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:40.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5464 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pods +Oct 27 15:27:41.182: INFO: created test-pod-1 +Oct 27 15:27:41.200: INFO: created test-pod-2 +Oct 27 15:27:41.217: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Oct 27 15:27:41.268: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:27:42.281: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:27:43.280: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:27:44.281: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 27 15:27:45.282: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:27:46.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5464" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":298,"skipped":5412,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:27:46.314: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-144 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod with failed condition +STEP: updating the pod +Oct 27 15:29:47.162: INFO: Successfully updated pod "var-expansion-b4755329-3d0c-4d82-9022-631948b96249" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Oct 27 15:29:49.197: INFO: Deleting pod "var-expansion-b4755329-3d0c-4d82-9022-631948b96249" in namespace "var-expansion-144" +Oct 27 15:29:49.211: INFO: Wait up to 5m0s for pod "var-expansion-b4755329-3d0c-4d82-9022-631948b96249" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:21.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-144" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":299,"skipped":5431,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:21.293: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2734 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-c04eadc3-0315-4f06-8529-0bca413a81b3 +STEP: Creating a pod to test consume configMaps +Oct 27 15:30:21.532: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b" in namespace "projected-2734" to be "Succeeded or Failed" +Oct 27 15:30:21.544: INFO: Pod "pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.320805ms +Oct 27 15:30:23.557: INFO: Pod "pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025337341s +Oct 27 15:30:25.570: INFO: Pod "pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038742238s +STEP: Saw pod success +Oct 27 15:30:25.571: INFO: Pod "pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b" satisfied condition "Succeeded or Failed" +Oct 27 15:30:25.583: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b container agnhost-container: +STEP: delete the pod +Oct 27 15:30:25.658: INFO: Waiting for pod pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b to disappear +Oct 27 15:30:25.669: INFO: Pod pod-projected-configmaps-49cd3397-feb7-4b4e-ae9a-eeb7e3123e5b no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:25.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2734" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":300,"skipped":5451,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:25.703: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7985 +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:30:25.894: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:29.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7985" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":301,"skipped":5467,"failed":0} +SSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:29.623: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5720 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Oct 27 15:30:29.835: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:35.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5720" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":302,"skipped":5470,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:35.249: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9857 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:30:35.473: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Oct 27 15:30:40.485: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 27 15:30:40.485: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:30:44.600: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-9857 d1ac154e-8cd8-4053-adc5-5ca19d47ef05 40779 1 2021-10-27 15:30:40 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-27 15:30:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:30:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost 
k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005b78a48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-27 15:30:40 +0000 UTC,LastTransitionTime:2021-10-27 15:30:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2021-10-27 15:30:42 +0000 UTC,LastTransitionTime:2021-10-27 15:30:40 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 27 15:30:44.612: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-9857 8eb29dc3-5e8b-487e-bd54-2e3253aad21f 40772 1 2021-10-27 15:30:40 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d1ac154e-8cd8-4053-adc5-5ca19d47ef05 0xc005b78e27 0xc005b78e28}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:30:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1ac154e-8cd8-4053-adc5-5ca19d47ef05\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:30:42 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost 
k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005b78ed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:30:44.624: INFO: Pod "test-cleanup-deployment-5b4d99b59b-kvmg4" is available: +&Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-kvmg4 test-cleanup-deployment-5b4d99b59b- deployment-9857 2e8a8b6f-d3ee-4bc3-8056-59c71ea98c4e 40771 0 2021-10-27 15:30:40 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[cni.projectcalico.org/containerID:78f5b347b80223580e49c07142e207d24a3bffa6d9b9061cc0bf8e213823fb43 cni.projectcalico.org/podIP:100.96.1.63/32 cni.projectcalico.org/podIPs:100.96.1.63/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 8eb29dc3-5e8b-487e-bd54-2e3253aad21f 0xc005b79277 0xc005b79278}] [] [{kube-controller-manager Update v1 2021-10-27 15:30:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8eb29dc3-5e8b-487e-bd54-2e3253aad21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:30:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:30:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z68z4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z68z4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralConta
iners:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:30:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:30:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:30:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:30:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.63,StartTime:2021-10-27 15:30:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:30:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://265693de1ea620cbd7df465303945c8e58b8918ff94995d3928f33fce2f012d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:44.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9857" for this suite. +•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":303,"skipped":5478,"failed":0} +SSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:44.658: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3064 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:30:44.878: INFO: Waiting up to 5m0s for pod "downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43" in namespace "downward-api-3064" to be "Succeeded or Failed" +Oct 27 15:30:44.889: INFO: Pod "downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43": Phase="Pending", Reason="", readiness=false. Elapsed: 11.022599ms +Oct 27 15:30:46.903: INFO: Pod "downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024537823s +Oct 27 15:30:48.916: INFO: Pod "downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038056705s +STEP: Saw pod success +Oct 27 15:30:48.916: INFO: Pod "downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43" satisfied condition "Succeeded or Failed" +Oct 27 15:30:48.928: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43 container dapi-container: +STEP: delete the pod +Oct 27 15:30:48.970: INFO: Waiting for pod downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43 to disappear +Oct 27 15:30:48.983: INFO: Pod downward-api-b46a5fa8-9ba4-436d-9933-0a9d1377db43 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:48.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3064" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":304,"skipped":5483,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:49.017: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-437 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test env composition +Oct 27 15:30:49.229: INFO: Waiting up to 5m0s for pod "var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630" in namespace "var-expansion-437" to be "Succeeded or Failed" +Oct 27 15:30:49.240: INFO: Pod "var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630": Phase="Pending", Reason="", readiness=false. Elapsed: 11.107757ms +Oct 27 15:30:51.252: INFO: Pod "var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0233582s +Oct 27 15:30:53.265: INFO: Pod "var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035535101s +STEP: Saw pod success +Oct 27 15:30:53.265: INFO: Pod "var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630" satisfied condition "Succeeded or Failed" +Oct 27 15:30:53.276: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630 container dapi-container: +STEP: delete the pod +Oct 27 15:30:53.312: INFO: Waiting for pod var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630 to disappear +Oct 27 15:30:53.323: INFO: Pod var-expansion-c8329bb7-df52-4a42-94e6-535e37fff630 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:53.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-437" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":305,"skipped":5500,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:53.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-4957 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:30:53.580: INFO: The status of Pod busybox-scheduling-ecd09d1b-93b2-4782-8981-72aa7b8dd55d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:30:55.593: INFO: The status of Pod busybox-scheduling-ecd09d1b-93b2-4782-8981-72aa7b8dd55d is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:30:57.595: INFO: The status of Pod busybox-scheduling-ecd09d1b-93b2-4782-8981-72aa7b8dd55d is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:30:57.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-4957" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":306,"skipped":5525,"failed":0} + +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:30:57.660: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8956 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:30:57.850: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 27 15:30:57.886: INFO: The status of Pod pod-exec-websocket-12fa26b9-4e8c-4736-acfd-438d94814ecb is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:30:59.899: INFO: The status of Pod pod-exec-websocket-12fa26b9-4e8c-4736-acfd-438d94814ecb is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:31:01.899: INFO: The status of Pod pod-exec-websocket-12fa26b9-4e8c-4736-acfd-438d94814ecb is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:02.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8956" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":307,"skipped":5525,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:02.088: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-197 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-ee29cb78-2289-4d1b-b6bb-6331be6d3ff8 +STEP: Creating a pod to test consume configMaps +Oct 27 15:31:02.312: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9c92fb9-1f9e-4731-b583-3da68982a5ee" in namespace "projected-197" to be "Succeeded or Failed" +Oct 27 15:31:02.325: INFO: Pod "pod-projected-configmaps-f9c92fb9-1f9e-4731-b583-3da68982a5ee": Phase="Pending", Reason="", readiness=false. Elapsed: 12.188308ms +Oct 27 15:31:04.337: INFO: Pod "pod-projected-configmaps-f9c92fb9-1f9e-4731-b583-3da68982a5ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024730748s +STEP: Saw pod success +Oct 27 15:31:04.337: INFO: Pod "pod-projected-configmaps-f9c92fb9-1f9e-4731-b583-3da68982a5ee" satisfied condition "Succeeded or Failed" +Oct 27 15:31:04.348: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-configmaps-f9c92fb9-1f9e-4731-b583-3da68982a5ee container agnhost-container: +STEP: delete the pod +Oct 27 15:31:04.387: INFO: Waiting for pod pod-projected-configmaps-f9c92fb9-1f9e-4731-b583-3da68982a5ee to disappear +Oct 27 15:31:04.400: INFO: Pod pod-projected-configmaps-f9c92fb9-1f9e-4731-b583-3da68982a5ee no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:04.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-197" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":308,"skipped":5531,"failed":0} +SSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:04.435: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5677 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:31:04.654: INFO: The status of Pod server-envvars-3043b84c-c935-42e7-8af5-af8ef0d1d998 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:31:06.667: INFO: The status of Pod server-envvars-3043b84c-c935-42e7-8af5-af8ef0d1d998 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:31:08.667: INFO: The status of Pod server-envvars-3043b84c-c935-42e7-8af5-af8ef0d1d998 is Running (Ready = true) +Oct 27 15:31:08.715: INFO: Waiting up to 5m0s for pod "client-envvars-907a6a23-8a9d-465b-8109-717616e19003" in namespace "pods-5677" to be "Succeeded or Failed" +Oct 27 15:31:08.727: INFO: Pod "client-envvars-907a6a23-8a9d-465b-8109-717616e19003": Phase="Pending", Reason="", readiness=false. Elapsed: 12.307729ms +Oct 27 15:31:10.741: INFO: Pod "client-envvars-907a6a23-8a9d-465b-8109-717616e19003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025838728s +Oct 27 15:31:12.754: INFO: Pod "client-envvars-907a6a23-8a9d-465b-8109-717616e19003": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038774139s +STEP: Saw pod success +Oct 27 15:31:12.795: INFO: Pod "client-envvars-907a6a23-8a9d-465b-8109-717616e19003" satisfied condition "Succeeded or Failed" +Oct 27 15:31:12.808: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod client-envvars-907a6a23-8a9d-465b-8109-717616e19003 container env3cont: +STEP: delete the pod +Oct 27 15:31:12.850: INFO: Waiting for pod client-envvars-907a6a23-8a9d-465b-8109-717616e19003 to disappear +Oct 27 15:31:12.862: INFO: Pod client-envvars-907a6a23-8a9d-465b-8109-717616e19003 no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:12.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5677" for this suite. +•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":309,"skipped":5534,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:12.900: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-105 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:31:41.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-105" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":310,"skipped":5552,"failed":0} +SSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:31:41.236: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-3136 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-3136 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating stateful set ss in namespace statefulset-3136 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3136 +Oct 27 15:31:41.484: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:31:51.499: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Oct 27 15:31:51.512: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3136 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:31:51.918: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:31:51.918: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:31:51.918: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:31:51.948: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 15:32:01.989: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:32:01.989: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:32:02.037: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 15:32:02.037: INFO: ss-0 shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:31:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:31:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:31:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:31:41 +0000 UTC }] +Oct 27 15:32:02.037: INFO: +Oct 27 15:32:02.037: INFO: StatefulSet ss has not reached scale 3, at 1 +Oct 27 15:32:03.050: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987159521s +Oct 27 15:32:04.063: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974926555s +Oct 27 15:32:05.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962234825s +Oct 27 15:32:06.089: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.949438404s +Oct 27 15:32:07.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.936113633s +Oct 27 15:32:08.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9231718s +Oct 27 15:32:09.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.910806505s +Oct 27 15:32:10.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.898422506s +Oct 27 15:32:11.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 885.48642ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3136 +Oct 27 15:32:12.232: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3136 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:32:12.620: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:32:12.620: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:32:12.620: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:32:12.620: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config 
--namespace=statefulset-3136 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:32:13.003: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 15:32:13.003: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:32:13.003: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:32:13.004: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3136 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:32:13.360: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 27 15:32:13.360: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:32:13.360: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:32:13.373: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false +Oct 27 15:32:23.387: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:32:23.387: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:32:23.387: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Oct 27 15:32:23.400: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3136 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:32:23.768: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:32:23.768: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:32:23.768: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:32:23.768: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3136 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:32:24.131: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:32:24.131: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:32:24.131: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:32:24.131: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3136 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:32:24.548: INFO: stderr: "+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:32:24.548: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:32:24.548: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:32:24.548: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:32:24.560: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Oct 27 15:32:34.586: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:32:34.586: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:32:34.586: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:32:34.624: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 15:32:34.624: INFO: ss-0 shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:31:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:31:41 +0000 UTC }] +Oct 27 15:32:34.624: INFO: ss-1 shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC }] +Oct 27 15:32:34.625: INFO: ss-2 shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC }] +Oct 27 15:32:34.625: INFO: +Oct 27 15:32:34.625: INFO: StatefulSet ss has not reached scale 0, at 3 +Oct 27 15:32:35.637: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 27 15:32:35.637: INFO: ss-1 shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC }] +Oct 27 15:32:35.637: INFO: ss-2 shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2021-10-27 15:32:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-27 15:32:02 +0000 UTC }] +Oct 27 15:32:35.637: INFO: +Oct 27 15:32:35.637: INFO: StatefulSet ss has not reached scale 0, at 2 +Oct 27 15:32:36.649: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.973103972s +Oct 27 15:32:37.661: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.960931873s +Oct 27 15:32:38.673: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.949138563s +Oct 27 15:32:39.684: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.937104167s +Oct 27 15:32:40.696: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.925745926s +Oct 27 15:32:41.708: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.914194008s +Oct 27 15:32:42.722: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.901697916s +Oct 27 15:32:43.735: INFO: Verifying statefulset ss doesn't scale past 0 for another 887.845943ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3136 +Oct 27 15:32:44.747: INFO: Scaling statefulset ss to 0 +Oct 27 15:32:44.783: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:32:44.794: INFO: Deleting all statefulset in ns statefulset-3136 +Oct 27 15:32:44.806: INFO: Scaling statefulset ss to 0 +Oct 27 15:32:44.843: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:32:44.855: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:44.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-3136" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":311,"skipped":5557,"failed":0} + +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:44.926: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-9209 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-s95kn in namespace proxy-9209 +I1027 15:32:45.161624 5683 runners.go:190] Created replication controller with name: proxy-service-s95kn, namespace: proxy-9209, replica count: 1 +I1027 15:32:46.213615 5683 runners.go:190] proxy-service-s95kn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1027 15:32:47.214092 5683 runners.go:190] proxy-service-s95kn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I1027 15:32:48.215257 5683 runners.go:190] proxy-service-s95kn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 27 15:32:48.226: INFO: setup took 3.101041956s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Oct 27 15:32:48.338: INFO: (0) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 110.949424ms) +Oct 27 15:32:48.343: INFO: (0) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 115.819093ms) +Oct 27 15:32:48.344: INFO: (0) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 116.890069ms) +Oct 27 15:32:48.344: INFO: (0) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 117.03523ms) +Oct 27 15:32:48.349: INFO: (0) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 121.889072ms) +Oct 27 15:32:48.349: INFO: (0) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 121.915807ms) +Oct 27 15:32:48.349: INFO: (0) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 122.015795ms) +Oct 27 15:32:48.349: INFO: (0) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... 
(200; 122.000921ms) +Oct 27 15:32:48.349: INFO: (0) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 121.936027ms) +Oct 27 15:32:48.349: INFO: (0) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 122.103021ms) +Oct 27 15:32:48.349: INFO: (0) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 121.980965ms) +Oct 27 15:32:48.429: INFO: (0) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 201.992679ms) +Oct 27 15:32:48.429: INFO: (0) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 202.096158ms) +Oct 27 15:32:48.429: INFO: (0) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 202.247409ms) +Oct 27 15:32:48.429: INFO: (0) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 202.15571ms) +Oct 27 15:32:48.429: INFO: (0) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... (200; 15.616502ms) +Oct 27 15:32:48.445: INFO: (1) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 15.637938ms) +Oct 27 15:32:48.445: INFO: (1) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 15.757088ms) +Oct 27 15:32:48.445: INFO: (1) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: ... (200; 31.660079ms) +Oct 27 15:32:48.461: INFO: (1) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 31.838429ms) +Oct 27 15:32:48.461: INFO: (1) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 31.767659ms) +Oct 27 15:32:48.461: INFO: (1) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 31.726937ms) +Oct 27 15:32:48.461: INFO: (1) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 31.868546ms) +Oct 27 15:32:48.477: INFO: (2) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 16.146745ms) +Oct 27 15:32:48.478: INFO: (2) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 16.490258ms) +Oct 27 15:32:48.478: INFO: (2) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 16.561581ms) +Oct 27 15:32:48.478: INFO: (2) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 17.154201ms) +Oct 27 15:32:48.478: INFO: (2) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 17.038555ms) +Oct 27 15:32:48.478: INFO: (2) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 17.061822ms) +Oct 27 15:32:48.528: INFO: (2) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 67.018048ms) +Oct 27 15:32:48.529: INFO: (2) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 67.874811ms) +Oct 27 15:32:48.529: INFO: (2) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... 
(200; 67.949206ms) +Oct 27 15:32:48.529: INFO: (2) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 68.004489ms) +Oct 27 15:32:48.574: INFO: (2) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 113.072625ms) +Oct 27 15:32:48.574: INFO: (2) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 113.107998ms) +Oct 27 15:32:48.574: INFO: (2) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 113.131115ms) +Oct 27 15:32:48.574: INFO: (2) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 113.103647ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 20.62205ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 20.685359ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 20.686476ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 20.787642ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 20.770365ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... (200; 20.683691ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 20.693354ms) +Oct 27 15:32:48.595: INFO: (3) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 20.726073ms) +Oct 27 15:32:48.599: INFO: (3) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 24.074263ms) +Oct 27 15:32:48.599: INFO: (3) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 24.028931ms) +Oct 27 15:32:48.601: INFO: (3) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 26.204919ms) +Oct 27 15:32:48.602: INFO: (3) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 26.89875ms) +Oct 27 15:32:48.602: INFO: (3) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 27.775097ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 26.195045ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 26.106621ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 26.042982ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 26.092038ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 26.174093ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 26.224319ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 26.127553ms) +Oct 27 15:32:48.629: INFO: (4) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 26.189251ms) +Oct 27 15:32:48.629: INFO: (4) 
/api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: ... (200; 30.529446ms) +Oct 27 15:32:48.634: INFO: (4) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 30.877952ms) +Oct 27 15:32:48.634: INFO: (4) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 31.574387ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 19.277734ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 19.203645ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 19.354946ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test (200; 19.308937ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 19.323731ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 19.48438ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 19.362864ms) +Oct 27 15:32:48.654: INFO: (5) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 19.277388ms) +Oct 27 15:32:48.657: INFO: (5) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 22.644976ms) +Oct 27 15:32:48.657: INFO: (5) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 22.708827ms) +Oct 27 15:32:48.657: INFO: (5) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 22.887628ms) +Oct 27 15:32:48.658: INFO: (5) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 23.365799ms) +Oct 27 15:32:48.661: INFO: (5) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 26.489085ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 77.326594ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 77.569139ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 77.397753ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 77.439734ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 77.440843ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... 
(200; 77.608627ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 77.548883ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 77.472037ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 77.509747ms) +Oct 27 15:32:48.739: INFO: (6) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 77.669977ms) +Oct 27 15:32:48.743: INFO: (6) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 81.423836ms) +Oct 27 15:32:48.744: INFO: (6) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 82.293642ms) +Oct 27 15:32:48.744: INFO: (6) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 82.33704ms) +Oct 27 15:32:48.744: INFO: (6) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 82.377447ms) +Oct 27 15:32:48.762: INFO: (7) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 18.016345ms) +Oct 27 15:32:48.762: INFO: (7) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 17.856436ms) +Oct 27 15:32:48.762: INFO: (7) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 17.85389ms) +Oct 27 15:32:48.762: INFO: (7) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 18.010015ms) +Oct 27 15:32:48.762: INFO: (7) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 17.9236ms) +Oct 27 15:32:48.833: INFO: (7) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 88.874702ms) +Oct 27 15:32:48.833: INFO: (7) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 88.974113ms) +Oct 27 15:32:48.833: INFO: (7) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 88.863324ms) +Oct 27 15:32:48.833: INFO: (7) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 89.021669ms) +Oct 27 15:32:48.833: INFO: (7) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 88.873515ms) +Oct 27 15:32:48.833: INFO: (7) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 88.958344ms) +Oct 27 15:32:48.833: INFO: (7) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... (200; 21.440722ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 21.504189ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 21.427264ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 21.46631ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... 
(200; 21.587647ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 21.492187ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 21.452586ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 21.638437ms) +Oct 27 15:32:48.858: INFO: (8) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 21.564645ms) +Oct 27 15:32:48.863: INFO: (8) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 26.683535ms) +Oct 27 15:32:48.863: INFO: (8) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 26.652409ms) +Oct 27 15:32:48.863: INFO: (8) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 26.734283ms) +Oct 27 15:32:48.864: INFO: (8) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 27.210277ms) +Oct 27 15:32:48.934: INFO: (9) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 69.645292ms) +Oct 27 15:32:48.934: INFO: (9) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 69.749295ms) +Oct 27 15:32:48.934: INFO: (9) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... (200; 69.933749ms) +Oct 27 15:32:48.934: INFO: (9) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 69.900347ms) +Oct 27 15:32:48.934: INFO: (9) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 69.848598ms) +Oct 27 15:32:48.934: INFO: (9) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 69.891001ms) +Oct 27 15:32:48.937: INFO: (9) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 73.113732ms) +Oct 27 15:32:48.937: INFO: (9) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 73.210609ms) +Oct 27 15:32:48.939: INFO: (9) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 74.709855ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 20.884045ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 20.899026ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... (200; 20.902023ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 20.88097ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 21.06164ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... 
(200; 20.925992ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 21.04606ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 21.160939ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 20.969079ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 21.012976ms) +Oct 27 15:32:48.960: INFO: (10) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 20.941828ms) +Oct 27 15:32:48.965: INFO: (10) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 26.13545ms) +Oct 27 15:32:48.965: INFO: (10) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 26.14275ms) +Oct 27 15:32:48.965: INFO: (10) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 26.22649ms) +Oct 27 15:32:49.029: INFO: (10) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 89.727871ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 21.111467ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 21.358621ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 21.216784ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 21.190803ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 21.424296ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 21.353672ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 21.311359ms) +Oct 27 15:32:49.050: INFO: (11) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... 
(200; 21.2968ms) +Oct 27 15:32:49.055: INFO: (11) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 25.444262ms) +Oct 27 15:32:49.055: INFO: (11) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 25.622126ms) +Oct 27 15:32:49.055: INFO: (11) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 25.529176ms) +Oct 27 15:32:49.055: INFO: (11) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 25.506215ms) +Oct 27 15:32:49.130: INFO: (12) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 75.348221ms) +Oct 27 15:32:49.130: INFO: (12) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 75.238071ms) +Oct 27 15:32:49.130: INFO: (12) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 75.4208ms) +Oct 27 15:32:49.130: INFO: (12) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 75.324231ms) +Oct 27 15:32:49.130: INFO: (12) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 75.477808ms) +Oct 27 15:32:49.137: INFO: (12) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 82.083978ms) +Oct 27 15:32:49.137: INFO: (12) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 82.061552ms) +Oct 27 15:32:49.137: INFO: (12) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 81.986314ms) +Oct 27 15:32:49.137: INFO: (12) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 82.113303ms) +Oct 27 15:32:49.137: INFO: (12) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 82.160926ms) +Oct 27 15:32:49.137: INFO: (12) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 82.182654ms) +Oct 27 15:32:49.138: INFO: (12) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 82.917919ms) +Oct 27 15:32:49.138: INFO: (12) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: ... (200; 83.218938ms) +Oct 27 15:32:49.138: INFO: (12) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 83.380363ms) +Oct 27 15:32:49.138: INFO: (12) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 83.430148ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 21.558423ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 21.575631ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 21.557651ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 21.617222ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 21.634818ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test (200; 21.543575ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... 
(200; 21.737486ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 21.626684ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 21.680843ms) +Oct 27 15:32:49.160: INFO: (13) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 21.592684ms) +Oct 27 15:32:49.166: INFO: (13) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 27.346701ms) +Oct 27 15:32:49.229: INFO: (13) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 90.159091ms) +Oct 27 15:32:49.229: INFO: (13) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 90.23694ms) +Oct 27 15:32:49.229: INFO: (13) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 90.18516ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 21.761454ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test<... (200; 21.744309ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 21.891057ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 21.97094ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 22.092991ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 22.073607ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 22.039169ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 22.204808ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 22.131901ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 22.065449ms) +Oct 27 15:32:49.251: INFO: (14) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 22.030255ms) +Oct 27 15:32:49.254: INFO: (14) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 25.413352ms) +Oct 27 15:32:49.254: INFO: (14) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 25.563804ms) +Oct 27 15:32:49.255: INFO: (14) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 25.636198ms) +Oct 27 15:32:49.255: INFO: (14) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 25.578973ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 20.79553ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... 
(200; 20.797513ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 20.716191ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 20.855974ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 21.053751ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 21.168182ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 20.740932ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 21.019187ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 20.833604ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 20.962332ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 20.988004ms) +Oct 27 15:32:49.276: INFO: (15) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: ... (200; 16.855641ms) +Oct 27 15:32:49.352: INFO: (16) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 17.351389ms) +Oct 27 15:32:49.352: INFO: (16) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 17.405539ms) +Oct 27 15:32:49.352: INFO: (16) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 17.336335ms) +Oct 27 15:32:49.352: INFO: (16) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test (200; 17.854691ms) +Oct 27 15:32:49.354: INFO: (16) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 19.5125ms) +Oct 27 15:32:49.354: INFO: (16) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 19.561989ms) +Oct 27 15:32:49.354: INFO: (16) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 19.494853ms) +Oct 27 15:32:49.354: INFO: (16) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 19.535089ms) +Oct 27 15:32:49.355: INFO: (16) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 20.6956ms) +Oct 27 15:32:49.358: INFO: (16) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 23.630334ms) +Oct 27 15:32:49.358: INFO: (16) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 24.277822ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 74.600822ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 74.687032ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 74.692654ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 74.712494ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux 
(200; 74.781034ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 74.957194ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 74.732702ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 74.859281ms) +Oct 27 15:32:49.433: INFO: (17) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 74.885712ms) +Oct 27 15:32:49.434: INFO: (17) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 75.121534ms) +Oct 27 15:32:49.434: INFO: (17) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: ... (200; 75.210491ms) +Oct 27 15:32:49.438: INFO: (17) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 79.068005ms) +Oct 27 15:32:49.438: INFO: (17) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 79.064437ms) +Oct 27 15:32:49.438: INFO: (17) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 79.067343ms) +Oct 27 15:32:49.438: INFO: (17) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 79.161771ms) +Oct 27 15:32:49.459: INFO: (18) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname2/proxy/: tls qux (200; 21.421178ms) +Oct 27 15:32:49.459: INFO: (18) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 21.502961ms) +Oct 27 15:32:49.459: INFO: (18) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:460/proxy/: tls baz (200; 21.417118ms) +Oct 27 15:32:49.459: INFO: (18) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 21.522882ms) +Oct 27 15:32:49.459: INFO: (18) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:462/proxy/: tls qux (200; 21.545024ms) +Oct 27 15:32:49.529: INFO: (18) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 90.763046ms) +Oct 27 15:32:49.529: INFO: (18) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 90.995013ms) +Oct 27 15:32:49.529: INFO: (18) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 90.984525ms) +Oct 27 15:32:49.529: INFO: (18) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... 
(200; 90.965599ms) +Oct 27 15:32:49.530: INFO: (18) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 92.041932ms) +Oct 27 15:32:49.530: INFO: (18) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq/proxy/: test (200; 92.002435ms) +Oct 27 15:32:49.530: INFO: (18) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 92.040688ms) +Oct 27 15:32:49.530: INFO: (18) /api/v1/namespaces/proxy-9209/pods/https:proxy-service-s95kn-ztqbq:443/proxy/: test (200; 15.432159ms) +Oct 27 15:32:49.549: INFO: (19) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:160/proxy/: foo (200; 15.771655ms) +Oct 27 15:32:49.549: INFO: (19) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 15.90163ms) +Oct 27 15:32:49.559: INFO: (19) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname1/proxy/: foo (200; 25.657855ms) +Oct 27 15:32:49.559: INFO: (19) /api/v1/namespaces/proxy-9209/services/https:proxy-service-s95kn:tlsportname1/proxy/: tls baz (200; 25.65361ms) +Oct 27 15:32:49.559: INFO: (19) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname1/proxy/: foo (200; 25.858694ms) +Oct 27 15:32:49.559: INFO: (19) /api/v1/namespaces/proxy-9209/pods/http:proxy-service-s95kn-ztqbq:1080/proxy/: ... (200; 25.72171ms) +Oct 27 15:32:49.559: INFO: (19) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:162/proxy/: bar (200; 25.718563ms) +Oct 27 15:32:49.559: INFO: (19) /api/v1/namespaces/proxy-9209/services/proxy-service-s95kn:portname2/proxy/: bar (200; 25.697917ms) +Oct 27 15:32:49.559: INFO: (19) /api/v1/namespaces/proxy-9209/pods/proxy-service-s95kn-ztqbq:1080/proxy/: test<... (200; 25.716543ms) +Oct 27 15:32:49.605: INFO: (19) /api/v1/namespaces/proxy-9209/services/http:proxy-service-s95kn:portname2/proxy/: bar (200; 71.769428ms) +STEP: deleting ReplicationController proxy-service-s95kn in namespace proxy-9209, will wait for the garbage collector to delete the pods +Oct 27 15:32:49.681: INFO: Deleting ReplicationController proxy-service-s95kn took: 13.461882ms +Oct 27 15:32:49.782: INFO: Terminating ReplicationController proxy-service-s95kn pods took: 101.137842ms +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:51.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-9209" for this suite. 
+•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":346,"completed":312,"skipped":5557,"failed":0} +SSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:51.508: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7267 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 27 15:32:54.789: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:32:54.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-7267" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":313,"skipped":5563,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:32:54.858: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6594 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:33:20.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-6594" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":314,"skipped":5609,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:33:20.770: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-7610 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:33:21.420: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created +Oct 27 15:33:23.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945601, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945601, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945601, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770945601, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:33:26.488: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:33:26.501: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:33:30.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying 
namespace "crd-webhook-7610" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":315,"skipped":5613,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:33:30.942: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-4092 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-4092 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-4092 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4092 +Oct 27 15:33:31.188: INFO: Found 0 stateful pods, waiting for 1 +Oct 27 15:33:41.204: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Oct 27 15:33:41.217: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:33:41.565: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:33:41.565: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:33:41.565: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:33:41.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 27 15:33:51.591: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:33:51.591: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:33:51.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999724s +Oct 27 15:33:52.652: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 8.987625385s +Oct 27 15:33:53.664: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.974685313s +Oct 27 15:33:54.677: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.961976376s +Oct 27 15:33:55.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.949713656s +Oct 27 15:33:56.704: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.937465962s +Oct 27 15:33:57.716: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.922221394s +Oct 27 15:33:58.729: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.909901232s +Oct 27 15:33:59.742: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.897313858s +Oct 27 15:34:00.755: INFO: Verifying statefulset ss doesn't scale past 1 for another 884.628319ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4092 +Oct 27 15:34:01.768: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:34:02.097: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:34:02.097: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:34:02.097: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:34:02.109: INFO: Found 1 stateful pods, waiting for 3 +Oct 27 15:34:12.122: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:34:12.122: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 27 15:34:12.122: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Oct 27 15:34:12.146: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:34:12.555: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:34:12.555: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:34:12.555: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:34:12.555: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:34:12.936: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:34:12.936: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:34:12.936: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:34:12.936: 
INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 27 15:34:13.330: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 27 15:34:13.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 27 15:34:13.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 27 15:34:13.330: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:34:13.433: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Oct 27 15:34:23.458: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:34:23.458: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:34:23.458: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 27 15:34:23.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999636s +Oct 27 15:34:24.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987989882s +Oct 27 15:34:25.521: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.973688208s +Oct 27 15:34:26.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.961159035s +Oct 27 15:34:27.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.948163207s +Oct 27 15:34:28.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.935072067s +Oct 27 15:34:29.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.92222657s +Oct 27 15:34:30.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.909542965s +Oct 27 15:34:31.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.896815698s +Oct 27 15:34:32.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 884.132634ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4092 +Oct 27 15:34:33.625: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:34:33.994: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:34:33.994: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:34:33.994: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:34:33.994: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:34:34.306: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:34:34.306: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:34:34.306: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:34:34.307: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-4092 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 27 15:34:34.636: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 27 15:34:34.636: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 27 15:34:34.636: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 27 15:34:34.636: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 27 15:34:44.685: INFO: Deleting all statefulset in ns statefulset-4092 +Oct 27 15:34:44.697: INFO: Scaling statefulset ss to 0 +Oct 27 15:34:44.733: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 27 15:34:44.745: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:44.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4092" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":316,"skipped":5623,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:44.815: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1216 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-1216 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-1216 +STEP: creating replication controller externalsvc in namespace services-1216 +I1027 15:34:45.065221 5683 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1216, replica count: 2 +I1027 15:34:48.117033 5683 
runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Oct 27 15:34:48.164: INFO: Creating new exec pod +Oct 27 15:34:52.208: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1216 exec execpods96h2 -- /bin/sh -x -c nslookup nodeport-service.services-1216.svc.cluster.local' +Oct 27 15:34:52.652: INFO: stderr: "+ nslookup nodeport-service.services-1216.svc.cluster.local\n" +Oct 27 15:34:52.652: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nnodeport-service.services-1216.svc.cluster.local\tcanonical name = externalsvc.services-1216.svc.cluster.local.\nName:\texternalsvc.services-1216.svc.cluster.local\nAddress: 100.65.182.154\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-1216, will wait for the garbage collector to delete the pods +Oct 27 15:34:52.728: INFO: Deleting ReplicationController externalsvc took: 13.52589ms +Oct 27 15:34:52.829: INFO: Terminating ReplicationController externalsvc pods took: 100.779026ms +Oct 27 15:34:54.753: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:34:54.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1216" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":317,"skipped":5652,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:34:54.802: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7994 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 in namespace container-probe-7994 +Oct 27 15:34:57.041: INFO: Started pod liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 in namespace container-probe-7994 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:34:57.052: INFO: Initial restart count of pod 
liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 is 0 +Oct 27 15:35:17.231: INFO: Restart count of pod container-probe-7994/liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 is now 1 (20.178201756s elapsed) +Oct 27 15:35:37.357: INFO: Restart count of pod container-probe-7994/liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 is now 2 (40.304438235s elapsed) +Oct 27 15:35:55.475: INFO: Restart count of pod container-probe-7994/liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 is now 3 (58.422958271s elapsed) +Oct 27 15:36:17.631: INFO: Restart count of pod container-probe-7994/liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 is now 4 (1m20.578934666s elapsed) +Oct 27 15:37:30.109: INFO: Restart count of pod container-probe-7994/liveness-96e7c97f-3553-4f44-a2a3-29afa96dd8c6 is now 5 (2m33.056096171s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:37:30.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7994" for this suite. +•{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":318,"skipped":5700,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:37:30.159: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4086 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Oct 27 15:37:30.353: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 27 15:37:34.517: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:37:49.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4086" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":319,"skipped":5718,"failed":0} + +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:37:49.976: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1643 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-d2097edb-63e3-4d3b-95f0-f1a1d9e06c7e +STEP: Creating a pod to test consume configMaps +Oct 27 15:37:50.215: INFO: Waiting up to 5m0s for pod "pod-configmaps-87ba251c-547b-4b99-bd42-907e24103c52" in namespace "configmap-1643" to be "Succeeded or Failed" +Oct 27 15:37:50.227: INFO: Pod "pod-configmaps-87ba251c-547b-4b99-bd42-907e24103c52": Phase="Pending", Reason="", readiness=false. Elapsed: 11.495041ms +Oct 27 15:37:52.239: INFO: Pod "pod-configmaps-87ba251c-547b-4b99-bd42-907e24103c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024297783s +STEP: Saw pod success +Oct 27 15:37:52.239: INFO: Pod "pod-configmaps-87ba251c-547b-4b99-bd42-907e24103c52" satisfied condition "Succeeded or Failed" +Oct 27 15:37:52.251: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-configmaps-87ba251c-547b-4b99-bd42-907e24103c52 container configmap-volume-test: +STEP: delete the pod +Oct 27 15:37:52.293: INFO: Waiting for pod pod-configmaps-87ba251c-547b-4b99-bd42-907e24103c52 to disappear +Oct 27 15:37:52.304: INFO: Pod pod-configmaps-87ba251c-547b-4b99-bd42-907e24103c52 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:37:52.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1643" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":320,"skipped":5718,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:37:52.337: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1850 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 27 15:37:52.550: INFO: Waiting up to 5m0s for pod "pod-92e09f20-7e42-4628-ba56-05b284c02cfa" in namespace "emptydir-1850" to be "Succeeded or Failed" +Oct 27 15:37:52.562: INFO: Pod "pod-92e09f20-7e42-4628-ba56-05b284c02cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 12.145832ms +Oct 27 15:37:54.574: INFO: Pod "pod-92e09f20-7e42-4628-ba56-05b284c02cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024490748s +Oct 27 15:37:56.586: INFO: Pod "pod-92e09f20-7e42-4628-ba56-05b284c02cfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036600505s +STEP: Saw pod success +Oct 27 15:37:56.586: INFO: Pod "pod-92e09f20-7e42-4628-ba56-05b284c02cfa" satisfied condition "Succeeded or Failed" +Oct 27 15:37:56.598: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-92e09f20-7e42-4628-ba56-05b284c02cfa container test-container: +STEP: delete the pod +Oct 27 15:37:56.709: INFO: Waiting for pod pod-92e09f20-7e42-4628-ba56-05b284c02cfa to disappear +Oct 27 15:37:56.720: INFO: Pod pod-92e09f20-7e42-4628-ba56-05b284c02cfa no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:37:56.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1850" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":321,"skipped":5735,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:37:56.754: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9194 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-c5b67e29-6058-4457-b08d-5cc86af62bb0 in namespace container-probe-9194 +Oct 27 15:38:01.000: INFO: Started pod liveness-c5b67e29-6058-4457-b08d-5cc86af62bb0 in namespace container-probe-9194 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 27 15:38:01.012: INFO: Initial restart count of pod liveness-c5b67e29-6058-4457-b08d-5cc86af62bb0 is 0 +Oct 27 15:38:19.137: INFO: Restart count of pod container-probe-9194/liveness-c5b67e29-6058-4457-b08d-5cc86af62bb0 is now 1 (18.124652407s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:19.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9194" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":322,"skipped":5755,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:19.187: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8942 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:38:19.415: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:38:21.428: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:38:23.427: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:25.427: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:27.427: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:29.427: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:31.428: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:33.428: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:35.428: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:37.427: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:39.427: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = false) +Oct 27 15:38:41.429: INFO: The status of Pod test-webserver-29a14285-7965-4671-9373-ae04a796759f is Running (Ready = true) +Oct 27 15:38:41.440: INFO: Container started at 2021-10-27 15:38:20 +0000 UTC, pod became ready at 2021-10-27 15:38:39 +0000 UTC +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:41.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-8942" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":323,"skipped":5775,"failed":0} +SS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:41.474: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-8299 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nspatchtest-db7031b4-0d43-49bc-938f-664c1ec18d51-9684 +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:41.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-8299" for this suite. +STEP: Destroying namespace "nspatchtest-db7031b4-0d43-49bc-938f-664c1ec18d51-9684" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":324,"skipped":5777,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:41.895: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-5343 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Oct 27 15:38:44.149: INFO: pods: 0 < 3 +Oct 27 15:38:46.164: INFO: running pods: 0 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 27 15:38:48.329: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:50.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-5343" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":325,"skipped":5786,"failed":0} +SSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:50.457: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4650 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:38:50.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab" in namespace "downward-api-4650" to be "Succeeded or Failed" +Oct 27 15:38:50.681: INFO: Pod "downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab": Phase="Pending", Reason="", readiness=false. Elapsed: 11.518585ms +Oct 27 15:38:52.694: INFO: Pod "downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024331057s +Oct 27 15:38:54.708: INFO: Pod "downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038103119s +STEP: Saw pod success +Oct 27 15:38:54.708: INFO: Pod "downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab" satisfied condition "Succeeded or Failed" +Oct 27 15:38:54.720: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab container client-container: +STEP: delete the pod +Oct 27 15:38:54.758: INFO: Waiting for pod downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab to disappear +Oct 27 15:38:54.769: INFO: Pod downwardapi-volume-9bcc3ddd-7d25-4896-a96b-796ab3c5a0ab no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:54.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4650" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":326,"skipped":5789,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:54.803: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5180 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:55.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5180" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":327,"skipped":5817,"failed":0} +SSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:55.124: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9669 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 27 15:38:55.336: INFO: Waiting up to 5m0s for pod "downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746" in namespace "downward-api-9669" to be "Succeeded or Failed" +Oct 27 15:38:55.347: INFO: Pod "downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746": Phase="Pending", Reason="", readiness=false. Elapsed: 11.392813ms +Oct 27 15:38:57.359: INFO: Pod "downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02374995s +Oct 27 15:38:59.373: INFO: Pod "downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03728474s +STEP: Saw pod success +Oct 27 15:38:59.373: INFO: Pod "downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746" satisfied condition "Succeeded or Failed" +Oct 27 15:38:59.385: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746 container dapi-container: +STEP: delete the pod +Oct 27 15:38:59.422: INFO: Waiting for pod downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746 to disappear +Oct 27 15:38:59.433: INFO: Pod downward-api-62edc428-d6d7-4a1c-b961-1d99f2b6d746 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:38:59.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9669" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":328,"skipped":5823,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:38:59.467: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-9139 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Oct 27 15:38:59.655: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 27 15:39:59.749: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:39:59.762: INFO: Starting informer... +STEP: Starting pods... +Oct 27 15:40:00.016: INFO: Pod1 is running on shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc. Tainting Node +Oct 27 15:40:02.082: INFO: Pod2 is running on shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Oct 27 15:40:08.282: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Oct 27 15:40:27.887: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:40:27.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-9139" for this suite. 
+•{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":329,"skipped":5882,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:40:27.951: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9056 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9056" for this suite. +•{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":330,"skipped":5896,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:28.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-5123 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 27 15:41:28.404: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 27 15:41:28.430: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 27 15:41:28.442: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 before test +Oct 27 15:41:28.468: INFO: addons-nginx-ingress-controller-d5756fc97-k8kst from kube-system started at 2021-10-27 14:37:29 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-vv84b from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: apiserver-proxy-sl296 from kube-system started at 2021-10-27 13:56:02 +0000 UTC (2 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: calico-node-4h2tf from kube-system started at 2021-10-27 13:58:05 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: calico-node-vertical-autoscaler-785b5f968-9qxv8 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-s7nwv from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: calico-typha-vertical-autoscaler-5c9655cddd-qxmpq from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container autoscaler ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: coredns-6944b5cf58-cqcmx from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: coredns-6944b5cf58-qwp9p from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container coredns ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: csi-driver-node-l4n7m from kube-system started at 2021-10-27 13:56:02 +0000 UTC (3 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: kube-proxy-4k6j5 from kube-system started at 2021-10-27 14:45:36 +0000 UTC (2 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: metrics-server-6b8fdcd747-t4xbj from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container metrics-server ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: node-exporter-cwjxv from kube-system started at 2021-10-27 13:56:02 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: node-problem-detector-g5rmr 
from kube-system started at 2021-10-27 14:24:37 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: vpn-shoot-77b49d5987-8ddn6 from kube-system started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: dashboard-metrics-scraper-7ccbfc448f-l8nhq from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 27 15:41:28.468: INFO: kubernetes-dashboard-7888b55b49-xptfd from kubernetes-dashboard started at 2021-10-27 13:56:22 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.468: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 27 15:41:28.468: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc before test +Oct 27 15:41:28.484: INFO: test-webserver-02ce52d7-aa32-4f94-bb26-4905e6d05b7a from container-probe-9056 started at 2021-10-27 15:40:28 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container test-webserver ready: false, restart count 0 +Oct 27 15:41:28.484: INFO: apiserver-proxy-z9z6b from kube-system started at 2021-10-27 13:56:05 +0000 UTC (2 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container proxy ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: Container sidecar ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: blackbox-exporter-65c549b94c-rjgf7 from kube-system started at 2021-10-27 14:03:35 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: calico-kube-controllers-56bcbfb5c5-f9t75 from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: calico-node-7gp7f from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container calico-node ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: calico-typha-deploy-546b97d4b5-z8pql from kube-system started at 2021-10-27 13:56:06 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container calico-typha ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: csi-driver-node-4sm4p from kube-system started at 2021-10-27 13:56:05 +0000 UTC (3 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container csi-driver ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: kube-proxy-g7ktr from kube-system started at 2021-10-27 14:45:36 +0000 UTC (2 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: Container kube-proxy ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: node-exporter-zsjq5 from kube-system started at 2021-10-27 13:56:05 +0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container node-exporter ready: true, restart count 0 +Oct 27 15:41:28.484: INFO: node-problem-detector-9pkv8 from kube-system started at 2021-10-27 14:24:37 
+0000 UTC (1 container statuses recorded) +Oct 27 15:41:28.484: INFO: Container node-problem-detector ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: verifying the node has the label node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +STEP: verifying the node has the label node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.585: INFO: Pod test-webserver-02ce52d7-aa32-4f94-bb26-4905e6d05b7a requesting resource cpu=0m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod addons-nginx-ingress-controller-d5756fc97-k8kst requesting resource cpu=100m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-vv84b requesting resource cpu=0m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod apiserver-proxy-sl296 requesting resource cpu=40m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod apiserver-proxy-z9z6b requesting resource cpu=40m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod blackbox-exporter-65c549b94c-rjgf7 requesting resource cpu=11m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod calico-kube-controllers-56bcbfb5c5-f9t75 requesting resource cpu=10m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod calico-node-4h2tf requesting resource cpu=250m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod calico-node-7gp7f requesting resource cpu=250m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod calico-node-vertical-autoscaler-785b5f968-9qxv8 requesting resource cpu=10m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod calico-typha-deploy-546b97d4b5-z8pql requesting resource cpu=200m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod calico-typha-horizontal-autoscaler-5b58bb446c-s7nwv requesting resource cpu=10m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod calico-typha-vertical-autoscaler-5c9655cddd-qxmpq requesting resource cpu=10m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod coredns-6944b5cf58-cqcmx requesting resource cpu=50m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod coredns-6944b5cf58-qwp9p requesting resource cpu=50m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod csi-driver-node-4sm4p requesting resource cpu=40m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod csi-driver-node-l4n7m requesting resource cpu=40m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod kube-proxy-4k6j5 requesting resource cpu=34m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod kube-proxy-g7ktr requesting resource cpu=34m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod metrics-server-6b8fdcd747-t4xbj requesting resource cpu=50m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod node-exporter-cwjxv requesting resource cpu=50m on Node 
shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod node-exporter-zsjq5 requesting resource cpu=50m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod node-problem-detector-9pkv8 requesting resource cpu=11m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +Oct 27 15:41:28.586: INFO: Pod node-problem-detector-g5rmr requesting resource cpu=11m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod vpn-shoot-77b49d5987-8ddn6 requesting resource cpu=100m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod dashboard-metrics-scraper-7ccbfc448f-l8nhq requesting resource cpu=0m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.586: INFO: Pod kubernetes-dashboard-7888b55b49-xptfd requesting resource cpu=50m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +STEP: Starting Pods to consume most of the cluster CPU. +Oct 27 15:41:28.586: INFO: Creating a pod which consumes cpu=745m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +Oct 27 15:41:28.607: INFO: Creating a pod which consumes cpu=891m on Node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +STEP: Creating another pod that requires unavailable amount of CPU. +STEP: Considering event: +Type = [Normal], Name = [filler-pod-0ebbb753-2f12-48c0-92e6-ef1c7343daf0.16b1ed83771b0347], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5123/filler-pod-0ebbb753-2f12-48c0-92e6-ef1c7343daf0 to shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-0ebbb753-2f12-48c0-92e6-ef1c7343daf0.16b1ed83b846a1e0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-0ebbb753-2f12-48c0-92e6-ef1c7343daf0.16b1ed83bdc5121c], Reason = [Created], Message = [Created container filler-pod-0ebbb753-2f12-48c0-92e6-ef1c7343daf0] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-0ebbb753-2f12-48c0-92e6-ef1c7343daf0.16b1ed83ca1df410], Reason = [Started], Message = [Started container filler-pod-0ebbb753-2f12-48c0-92e6-ef1c7343daf0] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-22ea3bb2-81b6-4f81-b645-5442db1865bd.16b1ed837831799b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5123/filler-pod-22ea3bb2-81b6-4f81-b645-5442db1865bd to shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-22ea3bb2-81b6-4f81-b645-5442db1865bd.16b1ed83bda5593c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-22ea3bb2-81b6-4f81-b645-5442db1865bd.16b1ed83c0f739b1], Reason = [Created], Message = [Created container filler-pod-22ea3bb2-81b6-4f81-b645-5442db1865bd] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-22ea3bb2-81b6-4f81-b645-5442db1865bd.16b1ed83c9fbf57a], Reason = [Started], Message = [Started container filler-pod-22ea3bb2-81b6-4f81-b645-5442db1865bd] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.16b1ed846af53909], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] 
+STEP: removing the label node off the node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:33.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5123" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":331,"skipped":5898,"failed":0} +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:33.996: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9450 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 15:41:34.183: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 create -f -' +Oct 27 15:41:34.626: INFO: stderr: "" +Oct 27 15:41:34.626: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:41:34.626: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:41:34.718: INFO: stderr: "" +Oct 27 15:41:34.718: INFO: stdout: "update-demo-nautilus-gp7b8 update-demo-nautilus-l46wr " +Oct 27 15:41:34.718: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:34.799: INFO: stderr: "" +Oct 27 15:41:34.799: INFO: stdout: "" +Oct 27 15:41:34.799: INFO: update-demo-nautilus-gp7b8 is created but not running +Oct 27 15:41:39.801: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:41:39.886: INFO: stderr: "" +Oct 27 15:41:39.886: INFO: stdout: "update-demo-nautilus-gp7b8 update-demo-nautilus-l46wr " +Oct 27 15:41:39.886: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:39.991: INFO: stderr: "" +Oct 27 15:41:39.991: INFO: stdout: "" +Oct 27 15:41:39.991: INFO: update-demo-nautilus-gp7b8 is created but not running +Oct 27 15:41:44.993: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:41:45.083: INFO: stderr: "" +Oct 27 15:41:45.083: INFO: stdout: "update-demo-nautilus-gp7b8 update-demo-nautilus-l46wr " +Oct 27 15:41:45.083: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:45.166: INFO: stderr: "" +Oct 27 15:41:45.166: INFO: stdout: "true" +Oct 27 15:41:45.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:41:45.257: INFO: stderr: "" +Oct 27 15:41:45.257: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:41:45.257: INFO: validating pod update-demo-nautilus-gp7b8 +Oct 27 15:41:45.334: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:41:45.334: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:41:45.334: INFO: update-demo-nautilus-gp7b8 is verified up and running +Oct 27 15:41:45.334: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-l46wr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:45.420: INFO: stderr: "" +Oct 27 15:41:45.420: INFO: stdout: "true" +Oct 27 15:41:45.421: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-l46wr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:41:45.503: INFO: stderr: "" +Oct 27 15:41:45.503: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:41:45.503: INFO: validating pod update-demo-nautilus-l46wr +Oct 27 15:41:45.570: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:41:45.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:41:45.570: INFO: update-demo-nautilus-l46wr is verified up and running +STEP: scaling down the replication controller +Oct 27 15:41:45.572: INFO: scanned /root for discovery docs: +Oct 27 15:41:45.572: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Oct 27 15:41:45.684: INFO: stderr: "" +Oct 27 15:41:45.684: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:41:45.684: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:41:45.778: INFO: stderr: "" +Oct 27 15:41:45.778: INFO: stdout: "update-demo-nautilus-gp7b8 update-demo-nautilus-l46wr " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Oct 27 15:41:50.781: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:41:50.869: INFO: stderr: "" +Oct 27 15:41:50.869: INFO: stdout: "update-demo-nautilus-gp7b8 " +Oct 27 15:41:50.870: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:50.951: INFO: stderr: "" +Oct 27 15:41:50.951: INFO: stdout: "true" +Oct 27 15:41:50.951: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:41:51.034: INFO: stderr: "" +Oct 27 15:41:51.034: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:41:51.034: INFO: validating pod update-demo-nautilus-gp7b8 +Oct 27 15:41:51.053: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:41:51.053: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:41:51.053: INFO: update-demo-nautilus-gp7b8 is verified up and running +STEP: scaling up the replication controller +Oct 27 15:41:51.054: INFO: scanned /root for discovery docs: +Oct 27 15:41:51.054: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Oct 27 15:41:52.181: INFO: stderr: "" +Oct 27 15:41:52.181: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:41:52.181: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:41:52.273: INFO: stderr: "" +Oct 27 15:41:52.273: INFO: stdout: "update-demo-nautilus-9fpqq update-demo-nautilus-gp7b8 " +Oct 27 15:41:52.273: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-9fpqq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:52.360: INFO: stderr: "" +Oct 27 15:41:52.360: INFO: stdout: "" +Oct 27 15:41:52.360: INFO: update-demo-nautilus-9fpqq is created but not running +Oct 27 15:41:57.361: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:41:57.456: INFO: stderr: "" +Oct 27 15:41:57.456: INFO: stdout: "update-demo-nautilus-9fpqq update-demo-nautilus-gp7b8 " +Oct 27 15:41:57.456: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-9fpqq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:57.551: INFO: stderr: "" +Oct 27 15:41:57.551: INFO: stdout: "true" +Oct 27 15:41:57.551: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-9fpqq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:41:57.643: INFO: stderr: "" +Oct 27 15:41:57.643: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:41:57.643: INFO: validating pod update-demo-nautilus-9fpqq +Oct 27 15:41:57.706: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:41:57.706: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:41:57.706: INFO: update-demo-nautilus-9fpqq is verified up and running +Oct 27 15:41:57.706: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:41:57.801: INFO: stderr: "" +Oct 27 15:41:57.801: INFO: stdout: "true" +Oct 27 15:41:57.801: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods update-demo-nautilus-gp7b8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:41:57.889: INFO: stderr: "" +Oct 27 15:41:57.889: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:41:57.889: INFO: validating pod update-demo-nautilus-gp7b8 +Oct 27 15:41:57.904: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:41:57.904: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:41:57.904: INFO: update-demo-nautilus-gp7b8 is verified up and running +STEP: using delete to clean up resources +Oct 27 15:41:57.904: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 delete --grace-period=0 --force -f -' +Oct 27 15:41:58.008: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:41:58.008: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 15:41:58.008: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get rc,svc -l name=update-demo --no-headers' +Oct 27 15:41:58.113: INFO: stderr: "No resources found in kubectl-9450 namespace.\n" +Oct 27 15:41:58.113: INFO: stdout: "" +Oct 27 15:41:58.114: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9450 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 15:41:58.208: INFO: stderr: "" +Oct 27 15:41:58.208: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:41:58.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9450" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":332,"skipped":5907,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:41:58.243: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5081 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Oct 27 15:41:58.458: INFO: Pod name sample-pod: Found 0 pods out of 3 +Oct 27 15:42:03.535: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Oct 27 15:42:03.546: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:03.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5081" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":333,"skipped":5916,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:03.769: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-911 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 27 15:42:04.060: INFO: The status of Pod annotationupdatef576bf87-c8f7-45a4-b15d-6aeb6c7eb6f4 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:42:06.073: INFO: The status of Pod annotationupdatef576bf87-c8f7-45a4-b15d-6aeb6c7eb6f4 is Running (Ready = true) +Oct 27 15:42:06.632: INFO: Successfully updated pod "annotationupdatef576bf87-c8f7-45a4-b15d-6aeb6c7eb6f4" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:10.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-911" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":334,"skipped":5955,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:10.734: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-2699 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:42:10.930: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 27 15:42:15.080: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2699 --namespace=crd-publish-openapi-2699 create -f -' +Oct 27 15:42:15.960: INFO: stderr: "" +Oct 27 15:42:15.960: INFO: stdout: "e2e-test-crd-publish-openapi-2830-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 15:42:15.960: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2699 --namespace=crd-publish-openapi-2699 delete e2e-test-crd-publish-openapi-2830-crds test-cr' +Oct 27 15:42:16.072: INFO: stderr: "" +Oct 27 15:42:16.072: INFO: stdout: "e2e-test-crd-publish-openapi-2830-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Oct 27 15:42:16.072: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2699 --namespace=crd-publish-openapi-2699 apply -f -' +Oct 27 15:42:16.282: INFO: stderr: "" +Oct 27 15:42:16.282: INFO: stdout: "e2e-test-crd-publish-openapi-2830-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 27 15:42:16.282: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2699 --namespace=crd-publish-openapi-2699 delete e2e-test-crd-publish-openapi-2830-crds test-cr' +Oct 27 15:42:16.379: INFO: stderr: "" +Oct 27 15:42:16.379: INFO: stdout: "e2e-test-crd-publish-openapi-2830-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Oct 27 15:42:16.379: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2699 explain e2e-test-crd-publish-openapi-2830-crds' +Oct 27 15:42:16.551: INFO: stderr: "" +Oct 27 15:42:16.551: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2830-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:20.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2699" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":335,"skipped":5960,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:20.805: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9526 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 27 15:42:21.162: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 create -f -' +Oct 27 15:42:21.467: INFO: stderr: "" +Oct 27 15:42:21.467: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 27 15:42:21.467: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:42:21.560: INFO: stderr: "" +Oct 27 15:42:21.560: INFO: stdout: "update-demo-nautilus-76p87 update-demo-nautilus-s6v27 " +Oct 27 15:42:21.560: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods update-demo-nautilus-76p87 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:42:21.644: INFO: stderr: "" +Oct 27 15:42:21.644: INFO: stdout: "" +Oct 27 15:42:21.644: INFO: update-demo-nautilus-76p87 is created but not running +Oct 27 15:42:26.646: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 27 15:42:26.735: INFO: stderr: "" +Oct 27 15:42:26.735: INFO: stdout: "update-demo-nautilus-76p87 update-demo-nautilus-s6v27 " +Oct 27 15:42:26.735: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods update-demo-nautilus-76p87 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:42:26.816: INFO: stderr: "" +Oct 27 15:42:26.816: INFO: stdout: "true" +Oct 27 15:42:26.816: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods update-demo-nautilus-76p87 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:42:26.896: INFO: stderr: "" +Oct 27 15:42:26.896: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:42:26.896: INFO: validating pod update-demo-nautilus-76p87 +Oct 27 15:42:26.962: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:42:26.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 27 15:42:26.962: INFO: update-demo-nautilus-76p87 is verified up and running +Oct 27 15:42:26.962: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods update-demo-nautilus-s6v27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 27 15:42:27.048: INFO: stderr: "" +Oct 27 15:42:27.048: INFO: stdout: "true" +Oct 27 15:42:27.049: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods update-demo-nautilus-s6v27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 27 15:42:27.135: INFO: stderr: "" +Oct 27 15:42:27.135: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 27 15:42:27.135: INFO: validating pod update-demo-nautilus-s6v27 +Oct 27 15:42:27.160: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 27 15:42:27.161: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 27 15:42:27.161: INFO: update-demo-nautilus-s6v27 is verified up and running +STEP: using delete to clean up resources +Oct 27 15:42:27.161: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 delete --grace-period=0 --force -f -' +Oct 27 15:42:27.257: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 27 15:42:27.257: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 27 15:42:27.257: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get rc,svc -l name=update-demo --no-headers' +Oct 27 15:42:27.353: INFO: stderr: "No resources found in kubectl-9526 namespace.\n" +Oct 27 15:42:27.353: INFO: stdout: "" +Oct 27 15:42:27.353: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9526 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 27 15:42:27.447: INFO: stderr: "" +Oct 27 15:42:27.447: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:27.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9526" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":336,"skipped":5972,"failed":0} +SS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:27.481: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-1442 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:42:27.685: INFO: Waiting up to 5m0s for pod "busybox-user-65534-926ca2fa-6f86-434f-b8c3-7ea0c3eaa8a1" in namespace "security-context-test-1442" to be "Succeeded or Failed" +Oct 27 15:42:27.696: INFO: Pod "busybox-user-65534-926ca2fa-6f86-434f-b8c3-7ea0c3eaa8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.452909ms +Oct 27 15:42:29.714: INFO: Pod "busybox-user-65534-926ca2fa-6f86-434f-b8c3-7ea0c3eaa8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029340854s +Oct 27 15:42:31.727: INFO: Pod "busybox-user-65534-926ca2fa-6f86-434f-b8c3-7ea0c3eaa8a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041775055s +Oct 27 15:42:31.727: INFO: Pod "busybox-user-65534-926ca2fa-6f86-434f-b8c3-7ea0c3eaa8a1" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:31.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-1442" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":337,"skipped":5974,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:31.762: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5775 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating pod +Oct 27 15:42:31.983: INFO: The status of Pod pod-hostip-40efb6bb-ac7b-4a31-af4a-374deb729792 is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:42:33.995: INFO: The status of Pod pod-hostip-40efb6bb-ac7b-4a31-af4a-374deb729792 is Running (Ready = true) +Oct 27 15:42:34.018: INFO: Pod pod-hostip-40efb6bb-ac7b-4a31-af4a-374deb729792 has hostIP: 10.250.0.3 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:34.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5775" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":338,"skipped":6010,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:34.051: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6910 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-677325ce-2307-49c8-bc8b-2298f47a38ac +STEP: Creating a pod to test consume configMaps +Oct 27 15:42:34.271: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2" in namespace "projected-6910" to be "Succeeded or Failed" +Oct 27 15:42:34.282: INFO: Pod "pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.408301ms +Oct 27 15:42:36.294: INFO: Pod "pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023584951s +Oct 27 15:42:38.307: INFO: Pod "pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035731804s +STEP: Saw pod success +Oct 27 15:42:38.307: INFO: Pod "pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2" satisfied condition "Succeeded or Failed" +Oct 27 15:42:38.318: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc pod pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2 container projected-configmap-volume-test: +STEP: delete the pod +Oct 27 15:42:38.396: INFO: Waiting for pod pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2 to disappear +Oct 27 15:42:38.407: INFO: Pod pod-projected-configmaps-de384956-4257-4106-921f-d73c1c208ea2 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:38.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6910" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":339,"skipped":6027,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:38.440: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename server-version +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in server-version-9356 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Request ServerVersion +STEP: Confirm major version +Oct 27 15:42:38.638: INFO: Major version: 1 +STEP: Confirm minor version +Oct 27 15:42:38.638: INFO: cleanMinorVersion: 22 +Oct 27 15:42:38.638: INFO: Minor version: 22 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:38.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-9356" for this suite. +•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":340,"skipped":6037,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:38.664: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9709 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:42:38.851: INFO: Creating deployment "webserver-deployment" +Oct 27 15:42:38.863: INFO: Waiting for observed generation 1 +Oct 27 15:42:40.886: INFO: Waiting for all required pods to come up +Oct 27 15:42:40.906: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Oct 27 15:42:46.941: INFO: Waiting for deployment "webserver-deployment" to complete +Oct 27 15:42:46.964: INFO: Updating deployment "webserver-deployment" with a non-existent image +Oct 27 15:42:46.988: INFO: Updating deployment webserver-deployment +Oct 27 15:42:46.988: INFO: Waiting for observed generation 2 +Oct 27 
15:42:49.011: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Oct 27 15:42:49.022: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Oct 27 15:42:49.033: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:42:49.067: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Oct 27 15:42:49.067: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Oct 27 15:42:49.078: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:42:49.117: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Oct 27 15:42:49.117: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Oct 27 15:42:49.141: INFO: Updating deployment webserver-deployment +Oct 27 15:42:49.141: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Oct 27 15:42:49.166: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Oct 27 15:42:51.241: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 27 15:42:51.263: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-9709 8d085858-684a-460e-b6c0-9ea62afae353 45709 3 2021-10-27 15:42:38 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00466f888 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-27 15:42:49 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-27 15:42:49 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Oct 27 15:42:51.275: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9709 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 45700 3 2021-10-27 15:42:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8d085858-684a-460e-b6c0-9ea62afae353 0xc00466fc87 0xc00466fc88}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d085858-684a-460e-b6c0-9ea62afae353\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00466fd28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:42:51.275: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Oct 27 15:42:51.275: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-9709 c0b26056-f3f1-4d17-aa8f-42a4f749e628 45701 3 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8d085858-684a-460e-b6c0-9ea62afae353 0xc00466fd87 0xc00466fd88}] [] [{kube-controller-manager Update apps/v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d085858-684a-460e-b6c0-9ea62afae353\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00466fe18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Oct 27 15:42:51.307: INFO: Pod 
"webserver-deployment-795d758f88-5nmrf" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-5nmrf webserver-deployment-795d758f88- deployment-9709 8b0f8483-74d8-41dd-9d1e-6a140f9fa8bc 45726 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc008ccd617 0xc008ccd618}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jg87l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jg87l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil
,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.308: INFO: Pod "webserver-deployment-795d758f88-67zm4" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-67zm4 webserver-deployment-795d758f88- deployment-9709 88705a29-8097-47a4-bd99-9dfb767e9e7d 45723 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc008ccd7f0 0xc008ccd7f1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-shqtv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shqtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedService
Account:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.308: INFO: Pod "webserver-deployment-795d758f88-7njp7" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-7njp7 webserver-deployment-795d758f88- deployment-9709 d069db1d-22ca-4c65-9e90-ab13da53de51 45662 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc008ccd9c0 0xc008ccd9c1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-88dh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88dh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.308: INFO: Pod "webserver-deployment-795d758f88-84f47" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-84f47 webserver-deployment-795d758f88- deployment-9709 913bc25e-a0ae-4af2-b69d-06727cab2e21 45717 0 2021-10-27 15:42:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:f0e5c82497f100a4b6bfea2b9b8b54f482b2c1efbb68441cdd256802b399a7ca cni.projectcalico.org/podIP:100.96.1.112/32 cni.projectcalico.org/podIPs:100.96.1.112/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc008ccdbb0 0xc008ccdbb1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t99sk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t99sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.308: INFO: Pod "webserver-deployment-795d758f88-8vpfj" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-8vpfj webserver-deployment-795d758f88- deployment-9709 7c79f73d-b05a-4260-8ad0-f6023da311bb 45643 0 2021-10-27 15:42:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:d4772b39aba199d85cfb34f02952a3a9fc2dc12602c82c31c9c394e86e5c3e19 cni.projectcalico.org/podIP:100.96.0.125/32 cni.projectcalico.org/podIPs:100.96.0.125/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc008ccddc0 0xc008ccddc1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:42:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kzw7w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzw7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.309: INFO: Pod "webserver-deployment-795d758f88-b4hlg" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-b4hlg webserver-deployment-795d758f88- deployment-9709 b5b2e236-b374-4479-bbe2-4a9d0b0b5dc1 45699 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc008ccdfb0 0xc008ccdfb1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5rjzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rjzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.309: INFO: Pod "webserver-deployment-795d758f88-dlcrh" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-dlcrh webserver-deployment-795d758f88- deployment-9709 b4002872-5f31-4776-949c-4c343b4e2784 45714 0 2021-10-27 15:42:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:19fbdac669d32703324ba605e0f1708f1917f1cad8b4e6ec1f62e6d11af404d2 cni.projectcalico.org/podIP:100.96.1.113/32 cni.projectcalico.org/podIPs:100.96.1.113/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc004a941a0 0xc004a941a1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lwbhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lwbhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.309: INFO: Pod "webserver-deployment-795d758f88-fq7nb" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-fq7nb webserver-deployment-795d758f88- deployment-9709 d91f6431-18bf-4370-91ad-8fe34566b4ce 45721 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc004a943a0 0xc004a943a1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-42tr7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42tr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.309: INFO: Pod "webserver-deployment-795d758f88-hkw5x" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-hkw5x webserver-deployment-795d758f88- deployment-9709 d918b286-403c-4e55-9799-8cc1476bc58a 45707 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc004a94570 0xc004a94571}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tkwhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkwhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.310: INFO: Pod "webserver-deployment-795d758f88-hwfpf" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-hwfpf webserver-deployment-795d758f88- deployment-9709 c1db0330-d219-4fb4-a8bf-f43327c84f2b 45708 0 2021-10-27 15:42:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:56a753fb18e0379e060649fe30e04b98d454db784c068d66a92d2eddc6f9bb47 cni.projectcalico.org/podIP:100.96.1.111/32 cni.projectcalico.org/podIPs:100.96.1.111/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc004a94760 0xc004a94761}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pvwrq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvwrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.310: INFO: Pod "webserver-deployment-795d758f88-kvx5s" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-kvx5s webserver-deployment-795d758f88- deployment-9709 c91d4e04-ba9c-4840-88d6-7e41688675b3 45703 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc004a94950 0xc004a94951}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p4m69,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4m69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.310: INFO: Pod "webserver-deployment-795d758f88-lxzjv" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-lxzjv webserver-deployment-795d758f88- deployment-9709 09a4ab34-e193-44b7-bd59-1487f6045da3 45642 0 2021-10-27 15:42:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:2d0c2e2bab87ca952ddeeef5ca2496659dd746824bcb54fc212c3cc45a9efbba cni.projectcalico.org/podIP:100.96.0.124/32 cni.projectcalico.org/podIPs:100.96.0.124/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc004a94b40 0xc004a94b41}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-27 15:42:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fvt4v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvt4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.310: INFO: Pod "webserver-deployment-795d758f88-vp87n" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-vp87n webserver-deployment-795d758f88- deployment-9709 1f3f4970-aa45-4cd7-bf4c-1db5980be4ce 45711 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4062d88d-1887-4f7b-9372-eb8ce1e3a8a1 0xc004a94d30 0xc004a94d31}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4062d88d-1887-4f7b-9372-eb8ce1e3a8a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gprrh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gprrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:d
efault,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.311: INFO: Pod "webserver-deployment-847dcfb7fb-4l7bq" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4l7bq webserver-deployment-847dcfb7fb- deployment-9709 da012da7-4077-4632-bada-42d64f87ddb8 45536 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:554c250fc50c6fcdd3829c188b4550f79a6bc764ba7c4f29d7db4923a8a2746c cni.projectcalico.org/podIP:100.96.0.123/32 cni.projectcalico.org/podIPs:100.96.0.123/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a94f60 0xc004a94f61}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2x7lq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2x7lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:100.96.0.123,StartTime:2021-10-27 15:42:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://09a8c1ad6778dfc1cad00eaef7a14f66930c5eb319e9675469b0cf5b2e116a71,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.311: INFO: Pod "webserver-deployment-847dcfb7fb-5ch2t" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5ch2t webserver-deployment-847dcfb7fb- deployment-9709 61565d51-94dd-4971-83ed-bb44527efe59 45725 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95170 0xc004a95171}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nrnv7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nrnv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.311: INFO: Pod "webserver-deployment-847dcfb7fb-5zcn7" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5zcn7 webserver-deployment-847dcfb7fb- deployment-9709 ea0341ee-4797-4a43-9559-1445becf0cd2 45533 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:1615b52a172c8c8fb6d4c9b6624be4845bd8a4c575afb2ae2a55cd4c759c4467 cni.projectcalico.org/podIP:100.96.0.121/32 cni.projectcalico.org/podIPs:100.96.0.121/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95350 0xc004a95351}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lnn7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnn7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:100.96.0.121,StartTime:2021-10-27 15:42:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://58294037047c021ff99d086acb18a331fee476b072112117501c5d6ad691f59b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.312: INFO: Pod "webserver-deployment-847dcfb7fb-66gts" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-66gts webserver-deployment-847dcfb7fb- deployment-9709 33551311-3d2e-49aa-b420-a9fb559b60f6 45730 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95560 0xc004a95561}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5s6br,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5s6br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.312: INFO: Pod "webserver-deployment-847dcfb7fb-6k48t" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6k48t webserver-deployment-847dcfb7fb- deployment-9709 e6c95348-6e47-4704-bbd9-9122c559ea82 45539 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:76a5e4b906679ade6ecb4e89b636867a4590ea8dda2ca7aeef15da4b7e98806e cni.projectcalico.org/podIP:100.96.0.122/32 cni.projectcalico.org/podIPs:100.96.0.122/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95730 0xc004a95731}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.122\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c5p2z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5p2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:100.96.0.122,StartTime:2021-10-27 15:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://acc9e6c354de0b890ff120019882a343d5d4d42c744c74ed74c647b8dc42a8d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.122,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.312: INFO: Pod "webserver-deployment-847dcfb7fb-77z4h" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-77z4h webserver-deployment-847dcfb7fb- deployment-9709 c2727669-0814-42a8-acbb-f32bff0decf7 45663 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95920 0xc004a95921}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q45b7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q45b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.312: INFO: Pod "webserver-deployment-847dcfb7fb-7x4tc" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7x4tc webserver-deployment-847dcfb7fb- deployment-9709 a734c30d-56b9-4a57-a501-53baed836e8f 45728 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95ad0 0xc004a95ad1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ml989,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ml989,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.313: INFO: Pod "webserver-deployment-847dcfb7fb-8bm9j" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8bm9j webserver-deployment-847dcfb7fb- deployment-9709 d72fd6df-5450-4bab-8378-d60e3cfc00e3 45687 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95c80 0xc004a95c81}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5cgz5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cgz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.313: INFO: Pod "webserver-deployment-847dcfb7fb-8jzjr" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8jzjr webserver-deployment-847dcfb7fb- deployment-9709 e115ab57-7ce7-477d-977f-84698323df0f 45729 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95e30 0xc004a95e31}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cl8wr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl8wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.313: INFO: Pod "webserver-deployment-847dcfb7fb-dd9wr" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dd9wr webserver-deployment-847dcfb7fb- deployment-9709 24a6c261-8be0-4ee9-89d6-02db33239bf2 45680 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc004a95fe0 0xc004a95fe1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-74gfd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74gfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.313: INFO: Pod "webserver-deployment-847dcfb7fb-fs2pg" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-fs2pg webserver-deployment-847dcfb7fb- deployment-9709 c53e7d6f-18a6-4891-b685-e24ebd62349e 45724 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc0075262e0 0xc0075262e1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vwwpc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwwpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.313: INFO: Pod "webserver-deployment-847dcfb7fb-hcrfh" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hcrfh webserver-deployment-847dcfb7fb- deployment-9709 0bc55677-b50a-44ae-b826-c816d04faa72 45727 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc007526540 0xc007526541}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k8jpv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8jpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.313: INFO: Pod "webserver-deployment-847dcfb7fb-hm7kv" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hm7kv webserver-deployment-847dcfb7fb- deployment-9709 4e3fa4d5-ef58-4c0f-ab80-d333be08bcb4 45705 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc007526700 0xc007526701}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z5xlb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5xlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.314: INFO: Pod "webserver-deployment-847dcfb7fb-kl5qz" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-kl5qz webserver-deployment-847dcfb7fb- deployment-9709 8025a540-b1e2-44dd-98e7-16b9f8cacadc 45579 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:6b518d1626607da0153028584f7383e9e9fcd433aadd80f27fe82c8c68c4cc0a cni.projectcalico.org/podIP:100.96.1.107/32 cni.projectcalico.org/podIPs:100.96.1.107/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc0075268d0 0xc0075268d1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6jd44,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jd44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.107,StartTime:2021-10-27 15:42:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://6e4ddfed8c719658b3f67c3f3a4dbce4db037fe764dbd273305992a8a7f1b166,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.314: INFO: Pod "webserver-deployment-847dcfb7fb-m8vfk" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-m8vfk webserver-deployment-847dcfb7fb- deployment-9709 2aba2019-a2ba-440f-ad01-eaca5431d6f4 45712 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc007526d20 0xc007526d21}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-64h67,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64h67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.314: INFO: Pod "webserver-deployment-847dcfb7fb-nn5v5" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nn5v5 webserver-deployment-847dcfb7fb- deployment-9709 5bb4d814-f6d6-4587-9c27-dd7f5dac16c6 45722 0 2021-10-27 15:42:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc007526ee0 0xc007526ee1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-27 15:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fb4fm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fb4fm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:defaul
t,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:,StartTime:2021-10-27 15:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.314: INFO: Pod "webserver-deployment-847dcfb7fb-rxsqw" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rxsqw webserver-deployment-847dcfb7fb- deployment-9709 a899bc65-28ed-4d74-b0f6-c9d549bd6441 45542 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:020b31e57fb18fd1dd65a781f2eeafac14ec4b9cba472b560fbdd7d88576bd09 cni.projectcalico.org/podIP:100.96.0.120/32 cni.projectcalico.org/podIPs:100.96.0.120/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc0075270b0 0xc0075270b1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.120\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tk88j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tk88j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.2,PodIP:100.96.0.120,StartTime:2021-10-27 15:42:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://4d9c61e3cb557633e37cea247bdb4f521941236bb84fbf40dada1e0df3956ea3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.314: INFO: Pod "webserver-deployment-847dcfb7fb-t5nm4" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-t5nm4 webserver-deployment-847dcfb7fb- deployment-9709 d9be7ffc-b082-403b-ac8d-e72f87d9a6d1 45573 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:7660b605ad1e06b2e0a37c4f96c6e6639d9205389b8bfa05e90ebf6de428cdf2 cni.projectcalico.org/podIP:100.96.1.108/32 cni.projectcalico.org/podIPs:100.96.1.108/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc0075272c0 
0xc0075272c1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tcmqx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcmqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Windo
wsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.108,StartTime:2021-10-27 15:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://97bc1c143fb3d47a94b10a895b32f2225e18da4228cde3c88e043a8630d8a173,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.315: INFO: Pod "webserver-deployment-847dcfb7fb-tcrzp" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-tcrzp webserver-deployment-847dcfb7fb- deployment-9709 3cf32820-7d91-4f88-8488-f8d00e984138 45576 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:04be3a2846d32a14a00dfa3c1e749e5c51fb27499396eefb98edb8c8259745b4 cni.projectcalico.org/podIP:100.96.1.109/32 cni.projectcalico.org/podIPs:100.96.1.109/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 
ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc0075274d0 0xc0075274d1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.109\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d7vkn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d7vkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:n
il,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.109,StartTime:2021-10-27 15:42:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://fb0d0c66d77c3df06ab4894b64d0b783eb0dc5c8e596ed9db5172605a78d58f6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 27 15:42:51.315: INFO: Pod "webserver-deployment-847dcfb7fb-xzllx" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xzllx webserver-deployment-847dcfb7fb- deployment-9709 3f1947a5-468c-4944-892a-ac911e606986 45563 0 2021-10-27 15:42:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:8f975ca69ae936ffcd6c804af40650d7815721ae3cc04fda447757a33ccbdb32 cni.projectcalico.org/podIP:100.96.1.106/32 
cni.projectcalico.org/podIPs:100.96.1.106/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c0b26056-f3f1-4d17-aa8f-42a4f749e628 0xc0075276e0 0xc0075276e1}] [] [{kube-controller-manager Update v1 2021-10-27 15:42:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b26056-f3f1-4d17-aa8f-42a4f749e628\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-27 15:42:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-27 15:42:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.106\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x5vdt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmtq6-2hp.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5vdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContex
t:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmtq6-2hp-worker-1-z1-7b584-zb9xc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-27 15:42:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.0.3,PodIP:100.96.1.106,StartTime:2021-10-27 15:42:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-27 15:42:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://f3acab8df60620fcee4eeb1648eae49b492c137dfbb53d0c9bed3ba424fd3a87,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:51.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9709" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":341,"skipped":6052,"failed":0} +SSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:51.340: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8004 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:51.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8004" for this suite. +•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":342,"skipped":6055,"failed":0} + +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:51.585: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6216 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 27 15:42:51.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed" in namespace "downward-api-6216" to be "Succeeded or Failed" +Oct 27 15:42:51.803: INFO: Pod "downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.3489ms +Oct 27 15:42:53.814: INFO: Pod "downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022818307s +Oct 27 15:42:55.828: INFO: Pod "downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035945562s +Oct 27 15:42:57.841: INFO: Pod "downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049410864s +Oct 27 15:42:59.854: INFO: Pod "downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06238484s +STEP: Saw pod success +Oct 27 15:42:59.854: INFO: Pod "downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed" satisfied condition "Succeeded or Failed" +Oct 27 15:42:59.865: INFO: Trying to get logs from node shoot--it--tmtq6-2hp-worker-1-z1-7b584-sq9d5 pod downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed container client-container: +STEP: delete the pod +Oct 27 15:42:59.923: INFO: Waiting for pod downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed to disappear +Oct 27 15:42:59.935: INFO: Pod downwardapi-volume-0f680e39-ce3d-4244-b1bf-0785030fb7ed no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:42:59.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6216" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":343,"skipped":6055,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:42:59.968: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7713 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 27 15:43:00.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-78988fc6cd\""}, 
v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} +Oct 27 15:43:02.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770946180, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 27 15:43:05.662: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 27 15:43:05.673: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7206-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:43:08.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7713" for this suite. +STEP: Destroying namespace "webhook-7713-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":344,"skipped":6059,"failed":0} +SSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:43:08.832: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-9842 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 27 15:43:09.053: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:43:11.133: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:43:13.066: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 27 15:43:13.106: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 27 15:43:15.119: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 27 15:43:15.143: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 15:43:15.154: INFO: Pod pod-with-prestop-http-hook still exists +Oct 27 15:43:17.154: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 15:43:17.166: INFO: Pod pod-with-prestop-http-hook still exists +Oct 27 15:43:19.155: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 27 15:43:19.167: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:43:26.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-9842" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":345,"skipped":6065,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 27 15:43:26.269: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2693 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2693 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-2693 +STEP: creating replication controller externalsvc in namespace services-2693 +I1027 15:43:26.531193 5683 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2693, replica count: 2 +I1027 15:43:29.583115 5683 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Oct 27 15:43:29.625: INFO: Creating new exec pod +Oct 27 15:43:33.666: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmtq6-2hp.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2693 exec execpodg4hhf -- /bin/sh -x -c nslookup clusterip-service.services-2693.svc.cluster.local' +Oct 27 15:43:34.052: INFO: stderr: "+ nslookup clusterip-service.services-2693.svc.cluster.local\n" +Oct 27 15:43:34.052: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nclusterip-service.services-2693.svc.cluster.local\tcanonical name = externalsvc.services-2693.svc.cluster.local.\nName:\texternalsvc.services-2693.svc.cluster.local\nAddress: 100.68.240.222\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-2693, will wait for the garbage collector to delete the pods +Oct 27 15:43:34.126: INFO: Deleting ReplicationController externalsvc took: 12.643137ms +Oct 27 15:43:34.227: INFO: Terminating ReplicationController externalsvc pods took: 100.486658ms +Oct 27 15:43:36.149: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 27 15:43:36.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2693" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":346,"skipped":6086,"failed":0} +Oct 27 15:43:36.195: INFO: Running AfterSuite actions on all nodes +Oct 27 15:43:36.195: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 +Oct 27 15:43:36.195: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Oct 27 15:43:36.195: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 +Oct 27 15:43:36.195: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Oct 27 15:43:36.195: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Oct 27 15:43:36.195: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Oct 27 15:43:36.195: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Oct 27 15:43:36.195: INFO: Running AfterSuite actions on node 1 +Oct 27 15:43:36.195: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/e2e/artifacts/1635343386/junit_01.xml +{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6086,"failed":0} + +Ran 346 of 6432 Specs in 6027.288 seconds +SUCCESS! -- 346 Passed | 0 Failed | 0 Flaked | 0 Pending | 6086 Skipped +PASS + +Ginkgo ran 1 suite in 1h40m29.470717095s +Test Suite Passed diff --git a/v1.22/gardener-gcp/junit_01.xml b/v1.22/gardener-gcp/junit_01.xml new file mode 100644 index 0000000000..a3d867326d --- /dev/null +++ b/v1.22/gardener-gcp/junit_01.xml @@ -0,0 +1,18607 @@
[junit_01.xml: 18,607 added lines of JUnit XML (the conformance test report); the XML markup was stripped during text extraction, leaving only empty diff "+" markers — omitted]
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git 
a/v1.22/gardener-openstack/PRODUCT.yaml b/v1.22/gardener-openstack/PRODUCT.yaml new file mode 100644 index 0000000000..5a37cf3803 --- /dev/null +++ b/v1.22/gardener-openstack/PRODUCT.yaml @@ -0,0 +1,9 @@
+vendor: SAP
+name: Gardener (https://github.com/gardener/gardener) shoot cluster deployed on OPENSTACK
+version: v1.34.0
+website_url: https://gardener.cloud
+repo_url: https://github.com/gardener/
+documentation_url: https://github.com/gardener/documentation/wiki
+product_logo_url: https://raw.githubusercontent.com/gardener/documentation/master/images/logo_w_saplogo.svg
+type: installer
+description: Gardener implements automated management and operation of Kubernetes clusters as a service and aims to support that service on multiple cloud providers.
\ No newline at end of file
diff --git a/v1.22/gardener-openstack/README.md b/v1.22/gardener-openstack/README.md new file mode 100644 index 0000000000..647dbcb2f7 --- /dev/null +++ b/v1.22/gardener-openstack/README.md @@ -0,0 +1,80 @@
+# Reproducing the test results:
+
+## Install Gardener on your Kubernetes Landscape
+Check out https://github.com/gardener/garden-setup for more detailed instructions and additional information. To install Gardener in your base cluster, the command line tool [sow](https://github.com/gardener/sow) is used. Use the provided Docker image that already contains `sow` and all required tools. To execute `sow` you call a [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) which starts `sow` in a Docker container (Docker will download the image from [eu.gcr.io/gardener-project/sow](http://eu.gcr.io/gardener-project/sow) if it is not available locally yet). Docker executes the `sow` command with the given arguments, and mounts parts of your file system into that container so that `sow` can read configuration files for the installation of Gardener components and can persist the state of your installation. After `sow`'s execution, Docker removes the container again.
+
+1. Clone the `sow` repository and add the path to our [wrapper script](https://github.com/gardener/sow/tree/master/docker/bin) to your `PATH` variable so you can call `sow` on the command line.
+
+   ```bash
+   # setup for calling sow via the wrapper
+   git clone "https://github.com/gardener/sow"
+   cd sow
+   export PATH=$PATH:$PWD/docker/bin
+   ```
+
+2. Create a directory `landscape` for your Gardener landscape and clone this repository into a subdirectory called `crop`:
+
+   ```bash
+   cd ..
+   mkdir landscape
+   cd landscape
+   git clone "https://github.com/gardener/garden-setup" crop
+   ```
+
+3. If you don't have your `kubeconfig` stored locally somewhere yet, download it. For example, for GKE you would use the following command:
+
+   ```bash
+   gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
+   ```
+
+4. Save your `kubeconfig` somewhere in your `landscape` directory. For the remaining steps we will assume that you saved it using the file path `landscape/kubeconfig`.
+
+5. In your `landscape` directory, create a configuration file called `acre.yaml`. The structure of the configuration file is described [below](#configuration-file-acreyaml). Note that the relative file path `./kubeconfig` must be specified in the field `landscape.cluster.kubeconfig` in the configuration file. Check out [configuration file acre](https://github.com/gardener/garden-setup#configuration-file-acreyaml) for configuration details; a minimal sketch follows this step.
+
+   > Do not use file `acre.yaml` in directory `crop`. This file is used internally by the installation tool.
+
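+   A minimal, illustrative sketch of creating this file is shown below. Only the `landscape.cluster.kubeconfig` field and the `./kubeconfig` path are taken from this README; every other field that garden-setup requires (landscape name, IaaS credentials, domain, and so on) is omitted here and must be filled in from the linked documentation:
+
+   ```bash
+   # Illustrative sketch only; this is not a complete acre.yaml.
+   # It records just the kubeconfig path from steps 4 and 5. See
+   # https://github.com/gardener/garden-setup#configuration-file-acreyaml
+   # for the full, authoritative schema.
+   cd landscape
+   cat > acre.yaml <<'EOF'
+   landscape:
+     cluster:
+       kubeconfig: ./kubeconfig
+   EOF
+   ```
+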
+6. If you created the base cluster using GKE, convert your `kubeconfig` file to one that uses basic authentication with Google-specific configuration parameters:
+
+   ```bash
+   sow convertkubeconfig
+   ```
+   When asked for credentials, enter the ones that the GKE dashboard shows when clicking on `show credentials`.
+
+   `sow` will replace the file specified in `landscape.cluster.kubeconfig` of your `acre.yaml` file with a kubeconfig file that uses basic authentication.
+
+7. In your first terminal window, use the following command to check in which order the components will be installed. Nothing will be deployed yet, so this is also a way to test whether the syntax in `acre.yaml` is correct:
+
+   ```bash
+   sow order -A
+   ```
+
+8. If there are no error messages, use the following command to deploy Gardener on your base cluster:
+
+   ```bash
+   sow deploy -A
+   ```
+
+9. `sow` now starts to install Gardener in your base cluster. The installation can take about 30 minutes. `sow` prints status messages to the terminal window so that you can check the status of the installation. The other terminal window will show the newly created Kubernetes resources after a while and whether their deployment was successful. Wait until the last component is deployed and all created Kubernetes resources are in status `Running`.
+
+10. Use the following command to find out the URL of the Gardener dashboard:
+
+    ```bash
+    sow url
+    ```
+
+
+## Create Kubernetes Cluster
+
+Log in to the SAP Gardener Dashboard to create a Kubernetes cluster on the Amazon Web Services, Microsoft Azure, Google Cloud Platform, Alibaba Cloud, or OpenStack cloud provider.
+
+## Launch E2E Conformance Tests
+Set `KUBECONFIG` to the path of the kubeconfig file of your newly created cluster (you can find the kubeconfig, e.g., in the Gardener dashboard). Follow the instructions below to run the Kubernetes e2e conformance tests. Adjust the values of the `k8sVersion` and `cloudprovider` arguments to match your new cluster (for this OpenStack submission, `--cloudprovider=openstack`).
+
+```bash
+# first set KUBECONFIG to your cluster
+docker run -ti --rm -v $KUBECONFIG:/mye2e/shoot.config golang:1.13 bash
+# run all commands below within the container
+go get github.com/gardener/test-infra; cd /go/src/github.com/gardener/test-infra
+export GO111MODULE=on; export E2E_EXPORT_PATH=/tmp/export; export KUBECONFIG=/mye2e/shoot.config; export GINKGO_PARALLEL=false
+go run -mod=vendor ./integration-tests/e2e --k8sVersion=1.17.1 --cloudprovider=gcp --testcasegroup="conformance"
+```
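+
+A host-side sketch of the surrounding steps is given below. It is illustrative only: the kubeconfig file name is a placeholder for wherever you saved the shoot kubeconfig, and the exact files written under `E2E_EXPORT_PATH` are determined by the test-infra runner:
+
+```bash
+# On the host, before the `docker run` above: point KUBECONFIG at the
+# shoot cluster's kubeconfig (placeholder path) and sanity-check it.
+export KUBECONFIG=$HOME/Downloads/shoot-kubeconfig.yaml
+kubectl get nodes
+
+# Inside the container, after the test run: inspect the exported results
+# under E2E_EXPORT_PATH (e.g. e2e.log and junit XML for a submission).
+ls -R /tmp/export
+```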
\ No newline at end of file diff --git a/v1.22/gardener-openstack/e2e.log b/v1.22/gardener-openstack/e2e.log new file mode 100644 index 0000000000..2c8421b111 --- /dev/null +++ b/v1.22/gardener-openstack/e2e.log @@ -0,0 +1,13613 @@
+Conformance test: not doing test setup.
+I1019 15:53:31.791827 4339 e2e.go:129] Starting e2e run "53c206ff-763e-4b70-8a0f-781602aa468c" on Ginkgo node 1
+{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0}
+Running Suite: Kubernetes e2e suite
+===================================
+Random Seed: 1634658811 - Will randomize all specs
+Will run 346 of 6432 specs
+
+Oct 19 15:53:33.496: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+Oct 19 15:53:33.497: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+Oct 19 15:53:33.513: INFO: Waiting up to 10m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready
+Oct 19 15:53:33.548: INFO: 24 / 24 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Oct 19 15:53:33.548: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready.
+Oct 19 15:53:33.548: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
+Oct 19 15:53:33.558: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'apiserver-proxy' (0 seconds elapsed)
+Oct 19 15:53:33.558: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed)
+Oct 19 15:53:33.558: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-driver-node' (0 seconds elapsed)
+Oct 19 15:53:33.558: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
+Oct 19 15:53:33.558: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed)
+Oct 19 15:53:33.558: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed)
+Oct 19 15:53:33.558: INFO: e2e test version: v1.22.2
+Oct 19 15:53:33.560: INFO: kube-apiserver version: v1.22.2
+Oct 19 15:53:33.560: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+Oct 19 15:53:33.564: INFO: Cluster IP family: ipv4
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] ReplicationController
+  should serve a basic image on each replica with a public image [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 19 15:53:33.564: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename replication-controller
+W1019 15:53:33.590035 4339 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+Oct 19 15:53:33.590: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
+Oct 19 15:53:33.603: INFO: PSP annotation exists on dry run pod: "extensions.gardener.cloud.provider-openstack.csi-driver-node"; assuming PodSecurityPolicy is enabled
+W1019 15:53:33.605311 4339 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+W1019 15:53:33.608065 4339 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+Oct 19 15:53:33.616: INFO: Found ClusterRoles; assuming RBAC is enabled.
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-1412
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
+[It] should serve a basic image on each replica with a public image [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: Creating replication controller my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb
+Oct 19 15:53:33.733: INFO: Pod name my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb: Found 0 pods out of 1
+Oct 19 15:53:38.738: INFO: Pod name my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb: Found 1 pods out of 1
+Oct 19 15:53:38.738: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb" are running
+Oct 19 15:53:38.740: INFO: Pod "my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb-smmmk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-19 15:53:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-19 15:53:37 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-19 15:53:37 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-19 15:53:33 +0000 UTC Reason: Message:}])
+Oct 19 15:53:38.740: INFO: Trying to dial the pod
+Oct 19 15:53:43.803: INFO: Controller my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb: Got expected result from replica 1 [my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb-smmmk]: "my-hostname-basic-3f782f61-327e-4cab-a2d7-9159a14b67bb-smmmk", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 19 15:53:43.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-1412" for this suite.
+•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":1,"skipped":21,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Container Runtime blackbox test on terminated container
+  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-node] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 19 15:53:43.811: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename container-runtime
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1406
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Oct 19 15:53:45.969: INFO: Expected: &{OK} to match Container's Termination Message: OK --
+STEP: delete the container
+[AfterEach] [sig-node] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 19 15:53:45.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-1406" for this suite.
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":2,"skipped":76,"failed":0}
+SSSS
+------------------------------
+[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
+  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 19 15:53:45.983: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename statefulset
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9803
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92
+[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107
+STEP: Creating service test in namespace statefulset-9803
+[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: Initializing watcher for selector baz=blah,foo=bar
+STEP: Creating stateful set ss in namespace statefulset-9803
+STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9803
+Oct 19 15:53:46.132: INFO: Found 0 stateful pods, waiting for 1
+Oct 19 15:53:56.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
+Oct 19 15:53:56.143: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Oct 19 15:53:56.339: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Oct 19 15:53:56.339: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Oct 19 15:53:56.339: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Oct 19 15:53:56.343: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Oct 19 15:54:06.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Oct 19 15:54:06.347: INFO: Waiting for statefulset status.replicas updated to 0
+Oct 19 15:54:06.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999982s
+Oct 19 15:54:07.363: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99775112s
+Oct 19 15:54:08.367: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990786238s
+Oct 19 15:54:09.370: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987569048s
+Oct 19 15:54:10.373: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.984346936s
+Oct 19 15:54:11.384: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.981293184s
+Oct 19 15:54:12.388: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969965209s
+Oct 19 15:54:13.391: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.966419733s
+Oct 19 15:54:14.394: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.963360483s
+Oct 19 15:54:15.397: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.695047ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9803
+Oct 19 15:54:16.401: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Oct 19 15:54:16.591: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Oct 19 15:54:16.591: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Oct 19 15:54:16.591: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Oct 19 15:54:16.594: INFO: Found 1 stateful pods, waiting for 3
+Oct 19 15:54:26.601: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Oct 19 15:54:26.601: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Oct 19 15:54:26.601: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Verifying that stateful set ss was scaled up in order
+STEP: Scale down will halt with unhealthy stateful pod
+Oct 19 15:54:26.608: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Oct 19 15:54:26.879: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Oct 19 15:54:26.879: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Oct 19 15:54:26.879: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Oct 19 15:54:26.879: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Oct 19 15:54:27.079: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Oct 19 15:54:27.079: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Oct 19 15:54:27.079: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Oct 19 15:54:27.079: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Oct 19 15:54:27.288: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Oct 19 15:54:27.288: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Oct 19 15:54:27.288: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Oct 19 15:54:27.288: INFO: Waiting for statefulset status.replicas updated to 0
+Oct 19 15:54:27.290: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
+Oct 19 15:54:37.296: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Oct 19 15:54:37.296: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Oct 19 15:54:37.296: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Oct 19 15:54:37.303: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999834s
+Oct 19 15:54:38.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997374977s
+Oct 19 15:54:39.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993509317s
+Oct 19 15:54:40.315: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988928161s
+Oct 19 15:54:41.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985350501s
+Oct 19 15:54:42.323: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.981753589s
+Oct 19 15:54:43.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976953282s
+Oct 19 15:54:44.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.971066548s
+Oct 19 15:54:45.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.967415074s
+Oct 19 15:54:46.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.61231ms
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9803
+Oct 19 15:54:47.346: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Oct 19 15:54:47.663: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Oct 19 15:54:47.663: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Oct 19 15:54:47.663: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Oct 19 15:54:47.663: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Oct 19 15:54:47.873: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Oct 19 15:54:47.873: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Oct 19 15:54:47.873: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Oct 19 15:54:47.873: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-9803 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Oct 19 15:54:48.138: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Oct 19 15:54:48.139: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Oct 19 15:54:48.139: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Oct 19 15:54:48.139: INFO: Scaling statefulset ss to 0
+STEP: Verifying that stateful set ss was scaled down in reverse order
+[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
+Oct 19 15:54:58.150: INFO: Deleting all statefulset in ns statefulset-9803
+Oct 19 15:54:58.153: INFO: Scaling statefulset ss to 0
+Oct 19 15:54:58.163: INFO: Waiting for statefulset status.replicas updated to 0
+Oct 19 15:54:58.165: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 19 15:54:58.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-9803" for this suite.
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":3,"skipped":80,"failed":0}
+SSSSS
+------------------------------
+[sig-cli] Kubectl client Guestbook application
+  should create and stop a working application [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 19 15:54:58.182: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6419
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
+[It] should create and stop a working application [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: creating all guestbook components
+Oct 19 15:54:58.319: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: agnhost-replica
+  labels:
+    app: agnhost
+    role: replica
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+  selector:
+    app: agnhost
+    role: replica
+    tier: backend
+
+Oct 19 15:54:58.319: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 create -f -'
+Oct 19 15:54:58.470: INFO: stderr: ""
+Oct 19 15:54:58.470: INFO: stdout: "service/agnhost-replica created\n"
+Oct 19 15:54:58.470: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: agnhost-primary
+  labels:
+    app: agnhost
+    role: primary
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+    targetPort: 6379
+  selector:
+    app: agnhost
+    role: primary
+    tier: backend
+
+Oct 19 15:54:58.470: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 create -f -'
+Oct 19 15:54:58.589: INFO: stderr: ""
+Oct 19 15:54:58.589: INFO: stdout: "service/agnhost-primary created\n"
+Oct 19 15:54:58.589: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: frontend
+  labels:
+    app: guestbook
+    tier: frontend
+spec:
+  # if your cluster supports it, uncomment the following to automatically create
+  # an external load-balanced IP for the frontend service.
+  # type: LoadBalancer
+  ports:
+  - port: 80
+  selector:
+    app: guestbook
+    tier: frontend
+
+Oct 19 15:54:58.589: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 create -f -'
+Oct 19 15:54:58.705: INFO: stderr: ""
+Oct 19 15:54:58.705: INFO: stdout: "service/frontend created\n"
+Oct 19 15:54:58.705: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: guestbook
+      tier: frontend
+  template:
+    metadata:
+      labels:
+        app: guestbook
+        tier: frontend
+    spec:
+      containers:
+      - name: guestbook-frontend
+        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+        args: [ "guestbook", "--backend-port", "6379" ]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 80
+
+Oct 19 15:54:58.705: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 create -f -'
+Oct 19 15:54:58.826: INFO: stderr: ""
+Oct 19 15:54:58.826: INFO: stdout: "deployment.apps/frontend created\n"
+Oct 19 15:54:58.826: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: agnhost-primary
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: agnhost
+      role: primary
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: agnhost
+        role: primary
+        tier: backend
+    spec:
+      containers:
+      - name: primary
+        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+        args: [ "guestbook", "--http-port", "6379" ]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 6379
+
+Oct 19 15:54:58.826: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 create -f -'
+Oct 19 15:54:58.947: INFO: stderr: ""
+Oct 19 15:54:58.947: INFO: stdout: "deployment.apps/agnhost-primary created\n"
+Oct 19 15:54:58.947: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: agnhost-replica
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: agnhost
+      role: replica
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: agnhost
+        role: replica
+        tier: backend
+    spec:
+      containers:
+      - name: replica
+        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 6379
+
+Oct 19 15:54:58.947: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 create -f -'
+Oct 19 15:54:59.059: INFO: stderr: ""
+Oct 19 15:54:59.059: INFO: stdout: "deployment.apps/agnhost-replica created\n"
+STEP: validating guestbook app
+Oct 19 15:54:59.059: INFO: Waiting for all frontend pods to be Running.
+Oct 19 15:55:04.109: INFO: Waiting for frontend to serve content.
+Oct 19 15:55:04.168: INFO: Trying to add a new entry to the guestbook.
+Oct 19 15:55:04.225: INFO: Verifying that added entry can be retrieved.
+STEP: using delete to clean up resources
+Oct 19 15:55:04.274: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 delete --grace-period=0 --force -f -'
+Oct 19 15:55:04.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Oct 19 15:55:04.327: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
+STEP: using delete to clean up resources
+Oct 19 15:55:04.327: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 delete --grace-period=0 --force -f -'
+Oct 19 15:55:04.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Oct 19 15:55:04.379: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
+STEP: using delete to clean up resources
+Oct 19 15:55:04.379: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 delete --grace-period=0 --force -f -'
+Oct 19 15:55:04.429: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Oct 19 15:55:04.429: INFO: stdout: "service \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Oct 19 15:55:04.429: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 delete --grace-period=0 --force -f -'
+Oct 19 15:55:04.477: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Oct 19 15:55:04.477: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Oct 19 15:55:04.477: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 delete --grace-period=0 --force -f -'
+Oct 19 15:55:04.524: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Oct 19 15:55:04.524: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
+STEP: using delete to clean up resources
+Oct 19 15:55:04.524: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6419 delete --grace-period=0 --force -f -'
+Oct 19 15:55:04.571: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Oct 19 15:55:04.571: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Oct 19 15:55:04.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6419" for this suite.
+•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":4,"skipped":85,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook
+  should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-node] Container Lifecycle Hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Oct 19 15:55:04.579: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-1646
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
+STEP: create the container to handle the HTTPGet hook request.
+Oct 19 15:55:04.724: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:55:06.727: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 19 15:55:06.739: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:55:08.743: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 19 15:55:08.750: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 19 15:55:08.752: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 19 15:55:10.753: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 19 15:55:10.756: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 19 15:55:12.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 19 15:55:12.756: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:12.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-1646" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":5,"skipped":123,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:12.774: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1322 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 19 15:55:12.909: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:29.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1322" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":6,"skipped":162,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:29.610: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-781 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pod templates +Oct 19 15:55:29.750: INFO: created test-podtemplate-1 +Oct 19 15:55:29.753: INFO: created test-podtemplate-2 +Oct 19 15:55:29.755: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Oct 19 15:55:29.757: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Oct 19 15:55:29.766: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:29.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-781" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":7,"skipped":188,"failed":0} +S +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:29.773: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7363 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Oct 19 15:55:29.912: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:29.912: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:29.914: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:29.914: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:29.919: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:29.920: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:29.943: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:29.943: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 19 15:55:30.814: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 19 15:55:30.814: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 19 15:55:31.248: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Oct 19 15:55:31.254: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: 
INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 0 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.256: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.273: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.273: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.308: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.308: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:31.330: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:31.330: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:32.819: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:32.819: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:32.833: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +STEP: listing Deployments +Oct 19 15:55:32.836: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Oct 19 15:55:32.842: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Oct 19 15:55:32.850: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:32.850: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:32.881: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:32.885: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:32.897: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:33.834: INFO: observed Deployment 
test-deployment in namespace deployment-7363 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:33.840: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:33.842: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:33.846: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:33.850: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 19 15:55:35.249: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 1 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:35.271: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:35.272: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 2 +Oct 19 15:55:35.272: INFO: observed Deployment test-deployment in namespace deployment-7363 with ReadyReplicas 3 +STEP: deleting the Deployment +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.278: INFO: observed event type MODIFIED +Oct 19 15:55:35.279: INFO: observed event type MODIFIED +Oct 19 15:55:35.279: INFO: observed event type MODIFIED +Oct 19 15:55:35.279: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 15:55:35.282: INFO: Log out all the ReplicaSets if there is no deployment created +Oct 19 15:55:35.284: INFO: ReplicaSet "test-deployment-56c98d85f9": +&ReplicaSet{ObjectMeta:{test-deployment-56c98d85f9 deployment-7363 18470393-2189-47e6-9698-e725a865784c 5300 4 2021-10-19 15:55:31 +0000 UTC 
map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment c431f6ab-e450-4787-9281-85cf2dd08c97 0xc005b28607 0xc005b28608}] [] [{kube-controller-manager Update apps/v1 2021-10-19 15:55:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c431f6ab-e450-4787-9281-85cf2dd08c97\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:55:35 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 56c98d85f9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.5 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005b286a0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Oct 19 15:55:35.288: INFO: pod: "test-deployment-56c98d85f9-7mtlp": +&Pod{ObjectMeta:{test-deployment-56c98d85f9-7mtlp test-deployment-56c98d85f9- deployment-7363 c1349bac-e1d1-4bf6-b157-48deb6c9a5d8 5296 0 2021-10-19 15:55:32 +0000 UTC 2021-10-19 15:55:34 +0000 UTC 0xc005aa82d0 map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[cni.projectcalico.org/podIP:100.96.1.19/32 cni.projectcalico.org/podIPs:100.96.1.19/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-56c98d85f9 18470393-2189-47e6-9698-e725a865784c 0xc005aa8397 0xc005aa8398}] [] [{kube-controller-manager Update v1 2021-10-19 15:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18470393-2189-47e6-9698-e725a865784c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 15:55:33 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 15:55:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4xt54,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4xt54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.19,StartTime:2021-10-19 15:55:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 15:55:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.5,ImageID:k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07,ContainerID:containerd://f0aa1f270a85260ab53405413966e1719714464c78059c6f8f5bfc01dc7972b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 19 15:55:35.288: INFO: pod: "test-deployment-56c98d85f9-s759x": +&Pod{ObjectMeta:{test-deployment-56c98d85f9-s759x test-deployment-56c98d85f9- deployment-7363 72dcbef0-7b58-4d59-87f7-472e07a04e0e 5298 0 2021-10-19 15:55:31 +0000 UTC 2021-10-19 15:55:36 +0000 UTC 0xc005aa85d0 map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[cni.projectcalico.org/podIP:100.96.0.18/32 cni.projectcalico.org/podIPs:100.96.0.18/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-56c98d85f9 18470393-2189-47e6-9698-e725a865784c 0xc005aa8637 0xc005aa8638}] [] [{calico Update v1 2021-10-19 15:55:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-19 15:55:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18470393-2189-47e6-9698-e725a865784c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 15:55:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p662w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p662w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias
{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.18,StartTime:2021-10-19 15:55:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 15:55:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.5,ImageID:k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07,ContainerID:containerd://186d71be0332a8af5935e81976a51b8b3f75ed0f35b2ad137962d1dc70240d18,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 19 15:55:35.288: INFO: ReplicaSet "test-deployment-855f7994f9": +&ReplicaSet{ObjectMeta:{test-deployment-855f7994f9 deployment-7363 3e8bc6ef-a2ab-45cc-b29d-6fb2c352ca75 5219 3 2021-10-19 15:55:29 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment c431f6ab-e450-4787-9281-85cf2dd08c97 0xc005b28717 0xc005b28718}] [] [{kube-controller-manager Update apps/v1 2021-10-19 15:55:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c431f6ab-e450-4787-9281-85cf2dd08c97\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:55:32 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 855f7994f9,test-deployment-static: 
true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005b287b0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Oct 19 15:55:35.293: INFO: ReplicaSet "test-deployment-d4dfddfbf": +&ReplicaSet{ObjectMeta:{test-deployment-d4dfddfbf deployment-7363 3740f505-b95d-4b55-992e-587651c5cdea 5294 2 2021-10-19 15:55:32 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment c431f6ab-e450-4787-9281-85cf2dd08c97 0xc005b28827 0xc005b28828}] [] [{kube-controller-manager Update apps/v1 2021-10-19 15:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c431f6ab-e450-4787-9281-85cf2dd08c97\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:55:33 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: d4dfddfbf,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005b288e0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Oct 19 15:55:35.295: INFO: pod: "test-deployment-d4dfddfbf-8drdl": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-8drdl 
test-deployment-d4dfddfbf- deployment-7363 723a40c2-509e-454a-b4e3-5de14473e805 5293 0 2021-10-19 15:55:33 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[cni.projectcalico.org/podIP:100.96.1.20/32 cni.projectcalico.org/podIPs:100.96.1.20/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 3740f505-b95d-4b55-992e-587651c5cdea 0xc005ad9e97 0xc005ad9e98}] [] [{kube-controller-manager Update v1 2021-10-19 15:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3740f505-b95d-4b55-992e-587651c5cdea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 15:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 15:55:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b6lzt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b6lzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-10-19 15:55:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.20,StartTime:2021-10-19 15:55:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 15:55:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://443ece28bcb5c862ff860b0958037cf8008e711a55f2dcbe6c7b42de86ec2094,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 19 15:55:35.295: INFO: pod: "test-deployment-d4dfddfbf-8grd6": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-8grd6 test-deployment-d4dfddfbf- deployment-7363 98afce61-5feb-4324-99c6-d325d9efcb17 5254 0 2021-10-19 15:55:32 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[cni.projectcalico.org/podIP:100.96.0.19/32 cni.projectcalico.org/podIPs:100.96.0.19/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 3740f505-b95d-4b55-992e-587651c5cdea 0xc005b600e7 0xc005b600e8}] [] [{kube-controller-manager Update v1 2021-10-19 15:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3740f505-b95d-4b55-992e-587651c5cdea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 15:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 15:55:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7qhph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qhph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-10-19 15:55:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:55:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.19,StartTime:2021-10-19 15:55:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 15:55:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://63c517fa8bffc112a943ca0e02f5532b1451e25f58aec5b6a847e47b1bd1e303,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:35.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7363" for this suite. +•{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":8,"skipped":189,"failed":0} +SSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:35.302: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-4069 +STEP: Waiting for a default service account to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 15:55:35.432: INFO: Creating pod... +Oct 19 15:55:35.442: INFO: Pod Quantity: 1 Status: Pending +Oct 19 15:55:36.446: INFO: Pod Quantity: 1 Status: Pending +Oct 19 15:55:37.445: INFO: Pod Status: Running +Oct 19 15:55:37.445: INFO: Creating service... 
+Oct 19 15:55:37.451: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/pods/agnhost/proxy/some/path/with/DELETE +Oct 19 15:55:37.546: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 19 15:55:37.546: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/pods/agnhost/proxy/some/path/with/GET +Oct 19 15:55:37.552: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 19 15:55:37.552: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/pods/agnhost/proxy/some/path/with/HEAD +Oct 19 15:55:37.559: INFO: http.Client request:HEAD | StatusCode:200 +Oct 19 15:55:37.559: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/pods/agnhost/proxy/some/path/with/OPTIONS +Oct 19 15:55:37.602: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 19 15:55:37.602: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/pods/agnhost/proxy/some/path/with/PATCH +Oct 19 15:55:37.606: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 19 15:55:37.606: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/pods/agnhost/proxy/some/path/with/POST +Oct 19 15:55:37.610: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 19 15:55:37.610: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/pods/agnhost/proxy/some/path/with/PUT +Oct 19 15:55:37.617: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 19 15:55:37.618: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/services/test-service/proxy/some/path/with/DELETE +Oct 19 15:55:37.622: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 19 15:55:37.622: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/services/test-service/proxy/some/path/with/GET +Oct 19 15:55:37.627: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 19 15:55:37.627: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/services/test-service/proxy/some/path/with/HEAD +Oct 19 15:55:37.632: INFO: http.Client request:HEAD | StatusCode:200 +Oct 19 15:55:37.632: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/services/test-service/proxy/some/path/with/OPTIONS +Oct 19 15:55:37.636: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 19 15:55:37.636: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/services/test-service/proxy/some/path/with/PATCH +Oct 19 15:55:37.641: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 19 15:55:37.641: INFO: Starting http.Client for 
https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/services/test-service/proxy/some/path/with/POST +Oct 19 15:55:37.645: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 19 15:55:37.645: INFO: Starting http.Client for https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com/api/v1/namespaces/proxy-4069/services/test-service/proxy/some/path/with/PUT +Oct 19 15:55:37.650: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:37.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-4069" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":9,"skipped":195,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:37.657: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename discovery +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-5419 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 15:55:38.071: INFO: Checking APIGroup: apiregistration.k8s.io +Oct 19 15:55:38.073: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Oct 19 15:55:38.073: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Oct 19 15:55:38.073: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Oct 19 15:55:38.073: INFO: Checking APIGroup: apps +Oct 19 15:55:38.074: INFO: PreferredVersion.GroupVersion: apps/v1 +Oct 19 15:55:38.074: INFO: Versions found [{apps/v1 v1}] +Oct 19 15:55:38.074: INFO: apps/v1 matches apps/v1 +Oct 19 15:55:38.074: INFO: Checking APIGroup: events.k8s.io +Oct 19 15:55:38.076: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Oct 19 15:55:38.076: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Oct 19 15:55:38.076: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Oct 19 15:55:38.076: INFO: Checking APIGroup: authentication.k8s.io +Oct 19 15:55:38.077: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Oct 19 15:55:38.077: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Oct 19 15:55:38.077: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Oct 19 15:55:38.077: INFO: Checking APIGroup: authorization.k8s.io +Oct 19 15:55:38.078: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Oct 19 15:55:38.078: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Oct 19 15:55:38.078: 
INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Oct 19 15:55:38.078: INFO: Checking APIGroup: autoscaling +Oct 19 15:55:38.080: INFO: PreferredVersion.GroupVersion: autoscaling/v1 +Oct 19 15:55:38.080: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Oct 19 15:55:38.080: INFO: autoscaling/v1 matches autoscaling/v1 +Oct 19 15:55:38.080: INFO: Checking APIGroup: batch +Oct 19 15:55:38.081: INFO: PreferredVersion.GroupVersion: batch/v1 +Oct 19 15:55:38.081: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Oct 19 15:55:38.081: INFO: batch/v1 matches batch/v1 +Oct 19 15:55:38.081: INFO: Checking APIGroup: certificates.k8s.io +Oct 19 15:55:38.082: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Oct 19 15:55:38.082: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Oct 19 15:55:38.082: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Oct 19 15:55:38.082: INFO: Checking APIGroup: networking.k8s.io +Oct 19 15:55:38.084: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Oct 19 15:55:38.084: INFO: Versions found [{networking.k8s.io/v1 v1}] +Oct 19 15:55:38.084: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Oct 19 15:55:38.084: INFO: Checking APIGroup: policy +Oct 19 15:55:38.085: INFO: PreferredVersion.GroupVersion: policy/v1 +Oct 19 15:55:38.085: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Oct 19 15:55:38.085: INFO: policy/v1 matches policy/v1 +Oct 19 15:55:38.085: INFO: Checking APIGroup: rbac.authorization.k8s.io +Oct 19 15:55:38.087: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Oct 19 15:55:38.087: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Oct 19 15:55:38.087: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Oct 19 15:55:38.087: INFO: Checking APIGroup: storage.k8s.io +Oct 19 15:55:38.088: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Oct 19 15:55:38.088: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Oct 19 15:55:38.088: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Oct 19 15:55:38.088: INFO: Checking APIGroup: admissionregistration.k8s.io +Oct 19 15:55:38.089: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Oct 19 15:55:38.089: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Oct 19 15:55:38.089: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Oct 19 15:55:38.089: INFO: Checking APIGroup: apiextensions.k8s.io +Oct 19 15:55:38.091: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Oct 19 15:55:38.091: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Oct 19 15:55:38.091: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Oct 19 15:55:38.091: INFO: Checking APIGroup: scheduling.k8s.io +Oct 19 15:55:38.092: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Oct 19 15:55:38.092: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Oct 19 15:55:38.092: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Oct 19 15:55:38.092: INFO: Checking APIGroup: coordination.k8s.io +Oct 19 15:55:38.093: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Oct 19 15:55:38.093: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Oct 19 15:55:38.093: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Oct 19 15:55:38.093: INFO: Checking APIGroup: node.k8s.io +Oct 19 15:55:38.095: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Oct 19 
15:55:38.095: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Oct 19 15:55:38.095: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Oct 19 15:55:38.095: INFO: Checking APIGroup: discovery.k8s.io +Oct 19 15:55:38.096: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Oct 19 15:55:38.096: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Oct 19 15:55:38.096: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Oct 19 15:55:38.096: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Oct 19 15:55:38.097: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 +Oct 19 15:55:38.097: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Oct 19 15:55:38.097: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 +Oct 19 15:55:38.097: INFO: Checking APIGroup: autoscaling.k8s.io +Oct 19 15:55:38.099: INFO: PreferredVersion.GroupVersion: autoscaling.k8s.io/v1 +Oct 19 15:55:38.099: INFO: Versions found [{autoscaling.k8s.io/v1 v1} {autoscaling.k8s.io/v1beta2 v1beta2}] +Oct 19 15:55:38.099: INFO: autoscaling.k8s.io/v1 matches autoscaling.k8s.io/v1 +Oct 19 15:55:38.099: INFO: Checking APIGroup: crd.projectcalico.org +Oct 19 15:55:38.100: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Oct 19 15:55:38.100: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Oct 19 15:55:38.100: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Oct 19 15:55:38.100: INFO: Checking APIGroup: cert.gardener.cloud +Oct 19 15:55:38.101: INFO: PreferredVersion.GroupVersion: cert.gardener.cloud/v1alpha1 +Oct 19 15:55:38.101: INFO: Versions found [{cert.gardener.cloud/v1alpha1 v1alpha1}] +Oct 19 15:55:38.101: INFO: cert.gardener.cloud/v1alpha1 matches cert.gardener.cloud/v1alpha1 +Oct 19 15:55:38.101: INFO: Checking APIGroup: dns.gardener.cloud +Oct 19 15:55:38.103: INFO: PreferredVersion.GroupVersion: dns.gardener.cloud/v1alpha1 +Oct 19 15:55:38.103: INFO: Versions found [{dns.gardener.cloud/v1alpha1 v1alpha1}] +Oct 19 15:55:38.103: INFO: dns.gardener.cloud/v1alpha1 matches dns.gardener.cloud/v1alpha1 +Oct 19 15:55:38.103: INFO: Checking APIGroup: snapshot.storage.k8s.io +Oct 19 15:55:38.104: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 +Oct 19 15:55:38.104: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] +Oct 19 15:55:38.104: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 +Oct 19 15:55:38.104: INFO: Checking APIGroup: metrics.k8s.io +Oct 19 15:55:38.105: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Oct 19 15:55:38.105: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Oct 19 15:55:38.105: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:38.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-5419" for this suite. 
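The preferred-version walk above can be reproduced against any cluster; a minimal sketch, assuming `kubectl` and `jq` are installed and `KUBECONFIG` points at the shoot (the core `v1` group is served under `/api` and therefore does not appear in this list):

```bash
# Print every named API group with its preferred version and all served versions,
# mirroring the Discovery conformance check above.
kubectl get --raw /apis | jq -r \
  '.groups[] | "\(.name): preferred=\(.preferredVersion.groupVersion), versions=\([.versions[].groupVersion] | join(" "))"'
```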
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":10,"skipped":199,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:38.113: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-8932 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 19 15:55:38.281: INFO: Number of nodes with available pods: 0 +Oct 19 15:55:38.281: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:55:39.288: INFO: Number of nodes with available pods: 1 +Oct 19 15:55:39.288: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:55:40.290: INFO: Number of nodes with available pods: 2 +Oct 19 15:55:40.290: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Getting /status +Oct 19 15:55:40.297: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Oct 19 15:55:40.303: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Oct 19 15:55:40.305: INFO: Observed &DaemonSet event: ADDED +Oct 19 15:55:40.305: INFO: Observed &DaemonSet event: MODIFIED +Oct 19 15:55:40.306: INFO: Observed &DaemonSet event: MODIFIED +Oct 19 15:55:40.306: INFO: Observed &DaemonSet event: MODIFIED +Oct 19 15:55:40.306: INFO: Found daemon set daemon-set in namespace daemonsets-8932 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 19 15:55:40.306: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Oct 19 15:55:40.312: INFO: Observed &DaemonSet event: ADDED +Oct 19 15:55:40.312: INFO: Observed &DaemonSet event: MODIFIED +Oct 19 15:55:40.312: INFO: Observed &DaemonSet event: MODIFIED +Oct 19 15:55:40.312: INFO: Observed &DaemonSet event: MODIFIED +Oct 19 15:55:40.312: INFO: Observed daemon set daemon-set in namespace daemonsets-8932 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 19 
15:55:40.312: INFO: Observed &DaemonSet event: MODIFIED +Oct 19 15:55:40.312: INFO: Found daemon set daemon-set in namespace daemonsets-8932 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Oct 19 15:55:40.312: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8932, will wait for the garbage collector to delete the pods +Oct 19 15:55:40.372: INFO: Deleting DaemonSet.extensions daemon-set took: 4.479159ms +Oct 19 15:55:40.472: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.563608ms +Oct 19 15:55:42.875: INFO: Number of nodes with available pods: 0 +Oct 19 15:55:42.875: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 19 15:55:42.877: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"5432"},"items":null} + +Oct 19 15:55:42.879: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"5432"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:42.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8932" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":11,"skipped":207,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:42.894: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8794 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8794.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8794.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 15:55:51.232: INFO: DNS probes using dns-8794/dns-test-382cc2dd-cb9c-4702-b1ea-1bcd28d3f813 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:51.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8794" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":12,"skipped":233,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:51.244: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1293 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-7f87c1f6-93af-4184-ba4c-c127466f60a7 +STEP: Creating a pod to test consume configMaps +Oct 19 15:55:51.390: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ed76d70-6deb-411a-9ae6-6ffec6c156f0" in namespace "projected-1293" to be "Succeeded or Failed" +Oct 19 15:55:51.393: INFO: Pod "pod-projected-configmaps-2ed76d70-6deb-411a-9ae6-6ffec6c156f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.367651ms +Oct 19 15:55:53.397: INFO: Pod "pod-projected-configmaps-2ed76d70-6deb-411a-9ae6-6ffec6c156f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007024933s +STEP: Saw pod success +Oct 19 15:55:53.397: INFO: Pod "pod-projected-configmaps-2ed76d70-6deb-411a-9ae6-6ffec6c156f0" satisfied condition "Succeeded or Failed" +Oct 19 15:55:53.399: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-configmaps-2ed76d70-6deb-411a-9ae6-6ffec6c156f0 container projected-configmap-volume-test: +STEP: delete the pod +Oct 19 15:55:53.414: INFO: Waiting for pod pod-projected-configmaps-2ed76d70-6deb-411a-9ae6-6ffec6c156f0 to disappear +Oct 19 15:55:53.416: INFO: Pod pod-projected-configmaps-2ed76d70-6deb-411a-9ae6-6ffec6c156f0 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:53.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1293" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":13,"skipped":245,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:53.423: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-5110 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 15:55:53.562: INFO: The status of Pod busybox-scheduling-b88ba968-b099-4ece-971d-39af4fee650c is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:55:55.565: INFO: The status of Pod busybox-scheduling-b88ba968-b099-4ece-971d-39af4fee650c is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:55.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-5110" for this suite. 
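The kubelet logging check above amounts to running a one-shot container and reading its stdout back through `kubectl logs`; a minimal sketch with an illustrative pod name and image tag:

```bash
# Run a busybox pod that echoes once, wait for completion, then fetch its logs.
kubectl run busybox-logs --image=busybox:1.29 --restart=Never -- sh -c 'echo "hello from busybox"'
until [ "$(kubectl get pod busybox-logs -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 1; done
kubectl logs busybox-logs   # expected output: hello from busybox
kubectl delete pod busybox-logs
```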
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":14,"skipped":267,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:55.584: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2940 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 19 15:55:55.722: INFO: Waiting up to 5m0s for pod "pod-e09ead9c-53e9-4cc2-b037-6d205dfb61f7" in namespace "emptydir-2940" to be "Succeeded or Failed" +Oct 19 15:55:55.724: INFO: Pod "pod-e09ead9c-53e9-4cc2-b037-6d205dfb61f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159812ms +Oct 19 15:55:57.727: INFO: Pod "pod-e09ead9c-53e9-4cc2-b037-6d205dfb61f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005287081s +STEP: Saw pod success +Oct 19 15:55:57.727: INFO: Pod "pod-e09ead9c-53e9-4cc2-b037-6d205dfb61f7" satisfied condition "Succeeded or Failed" +Oct 19 15:55:57.729: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-e09ead9c-53e9-4cc2-b037-6d205dfb61f7 container test-container: +STEP: delete the pod +Oct 19 15:55:57.740: INFO: Waiting for pod pod-e09ead9c-53e9-4cc2-b037-6d205dfb61f7 to disappear +Oct 19 15:55:57.742: INFO: Pod pod-e09ead9c-53e9-4cc2-b037-6d205dfb61f7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:55:57.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2940" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":15,"skipped":280,"failed":0} +SSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:55:57.749: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-5404 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 19 15:55:57.898: INFO: Number of nodes with available pods: 0 +Oct 19 15:55:57.898: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:55:58.905: INFO: Number of nodes with available pods: 0 +Oct 19 15:55:58.905: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:55:59.905: INFO: Number of nodes with available pods: 2 +Oct 19 15:55:59.905: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Oct 19 15:55:59.921: INFO: Number of nodes with available pods: 1 +Oct 19 15:55:59.921: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:56:00.928: INFO: Number of nodes with available pods: 2 +Oct 19 15:56:00.928: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5404, will wait for the garbage collector to delete the pods +Oct 19 15:56:00.989: INFO: Deleting DaemonSet.extensions daemon-set took: 3.112601ms +Oct 19 15:56:01.089: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.437043ms +Oct 19 15:56:04.092: INFO: Number of nodes with available pods: 0 +Oct 19 15:56:04.092: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 19 15:56:04.095: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"5673"},"items":null} + +Oct 19 15:56:04.097: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"5673"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:56:04.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-5404" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":16,"skipped":286,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:56:04.111: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6843 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 19 15:56:04.329: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6843 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 19 15:56:04.401: INFO: stderr: "" +Oct 19 15:56:04.401: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Oct 19 15:56:04.401: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6843 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' +Oct 19 15:56:04.536: INFO: stderr: "" 
+Oct 19 15:56:04.536: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 19 15:56:04.554: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6843 delete pods e2e-test-httpd-pod' +Oct 19 15:56:06.953: INFO: stderr: "" +Oct 19 15:56:06.953: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:56:06.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6843" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":17,"skipped":315,"failed":0} +SSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:56:06.960: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3110 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:56:07.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3110" for this suite. 
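The server-side dry-run pattern exercised above is worth noting: with `--dry-run=server` the API server runs admission and validation for the patch but persists nothing, so the pod keeps its original image. A minimal sketch with an illustrative pod name (the images match the ones the test uses):

```bash
kubectl run demo-httpd --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
# Validated by the server, but not persisted:
kubectl patch pod demo-httpd --dry-run=server \
  -p '{"spec":{"containers":[{"name":"demo-httpd","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
kubectl get pod demo-httpd -o jsonpath='{.spec.containers[0].image}'   # still the httpd image
kubectl delete pod demo-httpd
```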
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":18,"skipped":318,"failed":0} +SSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:56:07.110: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9142 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 15:56:07.275: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Oct 19 15:56:12.566: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 19 15:56:12.566: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Oct 19 15:56:14.569: INFO: Creating deployment "test-rollover-deployment" +Oct 19 15:56:14.576: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Oct 19 15:56:16.581: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Oct 19 15:56:16.587: INFO: Ensure that both replica sets have 1 created replica +Oct 19 15:56:16.596: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Oct 19 15:56:16.602: INFO: Updating deployment test-rollover-deployment +Oct 19 15:56:16.602: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Oct 19 15:56:18.609: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Oct 19 15:56:18.613: INFO: Make sure deployment "test-rollover-deployment" is complete +Oct 19 15:56:18.618: INFO: all replica sets need to contain the pod-template-hash label +Oct 19 15:56:18.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255777, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 15:56:20.625: INFO: all replica sets need to contain the pod-template-hash label 
+Oct 19 15:56:20.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255777, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 15:56:22.624: INFO: all replica sets need to contain the pod-template-hash label +Oct 19 15:56:22.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255777, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 15:56:24.626: INFO: all replica sets need to contain the pod-template-hash label +Oct 19 15:56:24.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255777, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 15:56:26.627: INFO: all replica sets need to contain the pod-template-hash label +Oct 19 15:56:26.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255777, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770255774, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 15:56:28.624: INFO: +Oct 19 15:56:28.624: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 15:56:28.631: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-9142 9947bf35-6e66-4e21-99e7-460159ab01d1 5903 2 2021-10-19 15:56:14 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-19 15:56:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:56:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ab4e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-19 15:56:14 +0000 UTC,LastTransitionTime:2021-10-19 15:56:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-19 15:56:28 +0000 UTC,LastTransitionTime:2021-10-19 15:56:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 19 15:56:28.634: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-9142 d7d7da6b-65e9-4df1-80fd-e62937112741 5896 2 2021-10-19 15:56:16 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9947bf35-6e66-4e21-99e7-460159ab01d1 0xc002d2eeb0 0xc002d2eeb1}] [] [{kube-controller-manager Update apps/v1 2021-10-19 15:56:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9947bf35-6e66-4e21-99e7-460159ab01d1\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:56:27 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d2ef48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 19 15:56:28.634: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Oct 19 15:56:28.634: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9142 d2e46c69-00d3-448e-8ff6-cecb9208e43e 5902 2 2021-10-19 15:56:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9947bf35-6e66-4e21-99e7-460159ab01d1 0xc002d2ec87 0xc002d2ec88}] [] [{e2e.test Update apps/v1 2021-10-19 15:56:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:56:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9947bf35-6e66-4e21-99e7-460159ab01d1\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:56:27 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d2ed48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 19 15:56:28.634: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9142 4d4f9e8b-5f3a-4b06-9a76-68953d517341 5811 2 2021-10-19 15:56:14 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9947bf35-6e66-4e21-99e7-460159ab01d1 0xc002d2eda7 0xc002d2eda8}] [] [{kube-controller-manager Update apps/v1 2021-10-19 15:56:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9947bf35-6e66-4e21-99e7-460159ab01d1\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 15:56:16 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d2ee58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 19 15:56:28.636: INFO: Pod "test-rollover-deployment-98c5f4599-b4l4x" is available: +&Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-b4l4x test-rollover-deployment-98c5f4599- deployment-9142 0db4a673-2b3f-4578-aef1-7d09fd45e84d 5827 0 2021-10-19 15:56:16 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[cni.projectcalico.org/podIP:100.96.0.31/32 cni.projectcalico.org/podIPs:100.96.0.31/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 d7d7da6b-65e9-4df1-80fd-e62937112741 0xc002d2f470 0xc002d2f471}] [] [{kube-controller-manager Update v1 2021-10-19 15:56:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7d7da6b-65e9-4df1-80fd-e62937112741\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 15:56:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 
15:56:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p99bc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p99bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:56:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:56:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:56:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 15:56:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.31,StartTime:2021-10-19 15:56:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 15:56:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://f10d65616257cf408eee010ea43ced6926d3a26e5179d560cfab855e1fea8d41,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:56:28.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9142" for this suite. +•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":19,"skipped":323,"failed":0} + +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:56:28.644: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-3376 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. 
+Oct 19 15:56:28.785: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 19 15:56:33.789: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Oct 19 15:56:33.791: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Oct 19 15:56:33.797: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Oct 19 15:56:33.799: INFO: Observed &ReplicaSet event: ADDED +Oct 19 15:56:33.799: INFO: Observed &ReplicaSet event: MODIFIED +Oct 19 15:56:33.799: INFO: Observed &ReplicaSet event: MODIFIED +Oct 19 15:56:33.799: INFO: Observed &ReplicaSet event: MODIFIED +Oct 19 15:56:33.799: INFO: Found replicaset test-rs in namespace replicaset-3376 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 19 15:56:33.799: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Oct 19 15:56:33.799: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 19 15:56:33.806: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Oct 19 15:56:33.808: INFO: Observed &ReplicaSet event: ADDED +Oct 19 15:56:33.808: INFO: Observed &ReplicaSet event: MODIFIED +Oct 19 15:56:33.808: INFO: Observed &ReplicaSet event: MODIFIED +Oct 19 15:56:33.808: INFO: Observed &ReplicaSet event: MODIFIED +Oct 19 15:56:33.808: INFO: Observed replicaset test-rs in namespace replicaset-3376 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 19 15:56:33.808: INFO: Observed &ReplicaSet event: MODIFIED +Oct 19 15:56:33.808: INFO: Found replicaset test-rs in namespace replicaset-3376 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Oct 19 15:56:33.808: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:56:33.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3376" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":20,"skipped":323,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:56:33.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-1686 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 19 15:56:33.952: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 19 15:57:33.987: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:57:33.990: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-4288 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 15:57:34.132: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Oct 19 15:57:34.134: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:57:34.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-4288" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:57:34.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-1686" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":21,"skipped":343,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:57:34.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3950 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 19 15:57:34.329: INFO: Waiting up to 5m0s for pod "pod-35c3a28e-ba12-4f87-a66c-8f5783f900b7" in namespace "emptydir-3950" to be "Succeeded or Failed" +Oct 19 15:57:34.332: INFO: Pod "pod-35c3a28e-ba12-4f87-a66c-8f5783f900b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23338ms +Oct 19 15:57:36.336: INFO: Pod "pod-35c3a28e-ba12-4f87-a66c-8f5783f900b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006875738s +STEP: Saw pod success +Oct 19 15:57:36.336: INFO: Pod "pod-35c3a28e-ba12-4f87-a66c-8f5783f900b7" satisfied condition "Succeeded or Failed" +Oct 19 15:57:36.338: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-35c3a28e-ba12-4f87-a66c-8f5783f900b7 container test-container: +STEP: delete the pod +Oct 19 15:57:36.390: INFO: Waiting for pod pod-35c3a28e-ba12-4f87-a66c-8f5783f900b7 to disappear +Oct 19 15:57:36.393: INFO: Pod pod-35c3a28e-ba12-4f87-a66c-8f5783f900b7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:57:36.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3950" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":22,"skipped":350,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:57:36.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3332 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 19 15:57:36.542: INFO: The status of Pod labelsupdatec082796c-41f7-40a7-9e9a-ad4c1a2c057c is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:57:38.545: INFO: The status of Pod labelsupdatec082796c-41f7-40a7-9e9a-ad4c1a2c057c is Running (Ready = true) +Oct 19 15:57:39.063: INFO: Successfully updated pod "labelsupdatec082796c-41f7-40a7-9e9a-ad4c1a2c057c" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:57:43.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3332" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":23,"skipped":367,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:57:43.094: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename hostport +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostport-2236 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Oct 19 15:57:43.245: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:57:45.248: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.250.3.120 on the node which pod1 resides and expect scheduled +Oct 19 15:57:45.257: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:57:47.261: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.250.3.120 but use UDP protocol on the node which pod2 resides +Oct 19 15:57:47.274: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:57:49.277: INFO: The status of Pod pod3 is Running (Ready = false) +Oct 19 15:57:51.279: INFO: The status of Pod pod3 is Running (Ready = true) +Oct 19 15:57:51.287: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:57:53.291: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Oct 19 15:57:53.294: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.250.3.120 http://127.0.0.1:54323/hostname] Namespace:hostport-2236 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 15:57:53.294: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 15:58:13.382: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Oct 19 15:58:13.382: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.250.3.120 http://127.0.0.1:54323/hostname] Namespace:hostport-2236 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 15:58:13.382: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 15:58:16.473: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Oct 19 15:58:16.473: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.250.3.120 http://127.0.0.1:54323/hostname] Namespace:hostport-2236 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 15:58:16.473: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.3.120, port: 54323 +Oct 19 15:58:22.385: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.250.3.120:54323/hostname] Namespace:hostport-2236 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 15:58:22.385: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.250.3.120, port: 54323 UDP +Oct 19 15:58:23.921: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.250.3.120 54323] Namespace:hostport-2236 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 15:58:23.921: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:29.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-2236" for this suite. 
+•{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":24,"skipped":384,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:29.104: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-354 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 19 15:58:29.253: INFO: namespace kubectl-354 +Oct 19 15:58:29.253: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-354 create -f -' +Oct 19 15:58:29.379: INFO: stderr: "" +Oct 19 15:58:29.379: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 19 15:58:30.383: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 15:58:30.383: INFO: Found 1 / 1 +Oct 19 15:58:30.383: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 19 15:58:30.386: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 15:58:30.386: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 19 15:58:30.386: INFO: wait on agnhost-primary startup in kubectl-354 +Oct 19 15:58:30.386: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-354 logs agnhost-primary-nl584 agnhost-primary' +Oct 19 15:58:30.465: INFO: stderr: "" +Oct 19 15:58:30.465: INFO: stdout: "Paused\n" +STEP: exposing RC +Oct 19 15:58:30.465: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-354 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Oct 19 15:58:30.529: INFO: stderr: "" +Oct 19 15:58:30.529: INFO: stdout: "service/rm2 exposed\n" +Oct 19 15:58:30.533: INFO: Service rm2 in namespace kubectl-354 found. 
+STEP: exposing service +Oct 19 15:58:32.542: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-354 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Oct 19 15:58:32.602: INFO: stderr: "" +Oct 19 15:58:32.602: INFO: stdout: "service/rm3 exposed\n" +Oct 19 15:58:32.606: INFO: Service rm3 in namespace kubectl-354 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:34.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-354" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":25,"skipped":397,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:34.623: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6876 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-18f30ed7-2273-4813-8e14-3d0ced0bca14 +STEP: Creating a pod to test consume configMaps +Oct 19 15:58:34.769: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e130e44f-dd56-40be-9aef-26db65f4e141" in namespace "projected-6876" to be "Succeeded or Failed" +Oct 19 15:58:34.773: INFO: Pod "pod-projected-configmaps-e130e44f-dd56-40be-9aef-26db65f4e141": Phase="Pending", Reason="", readiness=false. Elapsed: 3.390488ms +Oct 19 15:58:36.777: INFO: Pod "pod-projected-configmaps-e130e44f-dd56-40be-9aef-26db65f4e141": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007977936s +STEP: Saw pod success +Oct 19 15:58:36.778: INFO: Pod "pod-projected-configmaps-e130e44f-dd56-40be-9aef-26db65f4e141" satisfied condition "Succeeded or Failed" +Oct 19 15:58:36.782: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-configmaps-e130e44f-dd56-40be-9aef-26db65f4e141 container agnhost-container: +STEP: delete the pod +Oct 19 15:58:36.794: INFO: Waiting for pod pod-projected-configmaps-e130e44f-dd56-40be-9aef-26db65f4e141 to disappear +Oct 19 15:58:36.797: INFO: Pod pod-projected-configmaps-e130e44f-dd56-40be-9aef-26db65f4e141 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:36.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6876" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":26,"skipped":441,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:36.805: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1743 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Oct 19 15:58:37.004: INFO: Number of nodes with available pods: 0 +Oct 19 15:58:37.004: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:58:38.014: INFO: Number of nodes with available pods: 0 +Oct 19 15:58:38.014: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:58:39.013: INFO: Number of nodes with available pods: 2 +Oct 19 15:58:39.013: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Oct 19 15:58:39.030: INFO: Number of nodes with available pods: 1 +Oct 19 15:58:39.030: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:58:40.039: INFO: Number of nodes with available pods: 1 +Oct 19 15:58:40.039: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:58:41.039: INFO: Number of nodes with available pods: 1 +Oct 19 15:58:41.039: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:58:42.039: INFO: Number of nodes with available pods: 1 +Oct 19 15:58:42.039: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 15:58:43.041: INFO: Number of nodes with available pods: 2 +Oct 19 15:58:43.041: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1743, will wait for the garbage collector to delete the pods +Oct 19 15:58:43.103: INFO: Deleting DaemonSet.extensions daemon-set took: 5.223419ms +Oct 19 15:58:43.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.821617ms +Oct 19 15:58:45.307: INFO: Number of nodes with available pods: 0 +Oct 19 15:58:45.307: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 19 15:58:45.310: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6849"},"items":null} + +Oct 19 15:58:45.313: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6849"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:45.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1743" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":27,"skipped":449,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:45.334: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-9051 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:45.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-9051" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":28,"skipped":459,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:45.483: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslicemirroring-5661 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: mirroring a new custom Endpoint +Oct 19 15:58:45.630: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint +STEP: mirroring deletion of a custom Endpoint +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:47.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-5661" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":29,"skipped":472,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:47.661: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6200 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's command +Oct 19 15:58:47.809: INFO: Waiting up to 5m0s for pod "var-expansion-defb94b1-fb16-4cf3-ae75-b9443829e0f4" in namespace "var-expansion-6200" to be "Succeeded or Failed" +Oct 19 15:58:47.812: INFO: Pod "var-expansion-defb94b1-fb16-4cf3-ae75-b9443829e0f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.150426ms +Oct 19 15:58:49.818: INFO: Pod "var-expansion-defb94b1-fb16-4cf3-ae75-b9443829e0f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00915531s +STEP: Saw pod success +Oct 19 15:58:49.818: INFO: Pod "var-expansion-defb94b1-fb16-4cf3-ae75-b9443829e0f4" satisfied condition "Succeeded or Failed" +Oct 19 15:58:49.821: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod var-expansion-defb94b1-fb16-4cf3-ae75-b9443829e0f4 container dapi-container: +STEP: delete the pod +Oct 19 15:58:49.833: INFO: Waiting for pod var-expansion-defb94b1-fb16-4cf3-ae75-b9443829e0f4 to disappear +Oct 19 15:58:49.836: INFO: Pod var-expansion-defb94b1-fb16-4cf3-ae75-b9443829e0f4 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:49.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6200" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":30,"skipped":531,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:49.846: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8857 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service multi-endpoint-test in namespace services-8857 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8857 to expose endpoints map[] +Oct 19 15:58:49.997: INFO: successfully validated that service multi-endpoint-test in namespace services-8857 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-8857 +Oct 19 15:58:50.008: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:58:52.013: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8857 to expose endpoints map[pod1:[100]] +Oct 19 15:58:52.031: INFO: successfully validated that service multi-endpoint-test in namespace services-8857 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-8857 +Oct 19 15:58:52.041: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:58:54.046: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8857 to expose endpoints map[pod1:[100] pod2:[101]] +Oct 19 15:58:54.092: INFO: successfully validated that service multi-endpoint-test in namespace services-8857 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Oct 19 15:58:54.092: INFO: Creating new exec pod +Oct 19 15:58:57.107: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8857 exec execpodjhnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Oct 19 15:58:57.360: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Oct 19 15:58:57.360: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 15:58:57.361: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8857 exec execpodjhnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.129.254 80' +Oct 19 15:58:57.654: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.129.254 80\nConnection to 100.67.129.254 80 port [tcp/http] succeeded!\n" +Oct 19 15:58:57.654: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 15:58:57.654: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8857 exec execpodjhnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Oct 19 15:58:57.881: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Oct 19 15:58:57.881: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 15:58:57.881: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8857 exec execpodjhnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.129.254 81' +Oct 19 15:58:58.048: INFO: stderr: "+ nc -v -t -w 2 100.67.129.254 81\n+ echo hostName\nConnection to 100.67.129.254 81 port [tcp/*] succeeded!\n" +Oct 19 15:58:58.048: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-8857 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8857 to expose endpoints map[pod2:[101]] +Oct 19 15:58:58.067: INFO: successfully validated that service multi-endpoint-test in namespace services-8857 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-8857 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8857 to expose endpoints map[] +Oct 19 15:58:58.080: INFO: successfully validated that service multi-endpoint-test in namespace services-8857 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:58:58.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8857" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":31,"skipped":542,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:58:58.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-7182 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-bc8nq in namespace proxy-7182 +I1019 15:58:58.240608 4339 runners.go:190] Created replication controller with name: proxy-service-bc8nq, namespace: proxy-7182, replica count: 1 +I1019 15:58:59.292231 4339 runners.go:190] proxy-service-bc8nq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1019 15:59:00.293252 4339 runners.go:190] proxy-service-bc8nq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 15:59:00.296: INFO: setup took 2.067352576s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Oct 19 15:59:00.312: INFO: (0) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... 
(200; 15.490326ms) +Oct 19 15:59:00.316: INFO: (0) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 18.949371ms) +Oct 19 15:59:00.320: INFO: (0) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 22.918127ms) +Oct 19 15:59:00.320: INFO: (0) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 23.301858ms) +Oct 19 15:59:00.320: INFO: (0) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 23.123836ms) +Oct 19 15:59:00.320: INFO: (0) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 23.10671ms) +Oct 19 15:59:00.377: INFO: (0) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 80.356973ms) +Oct 19 15:59:00.377: INFO: (0) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 80.564628ms) +Oct 19 15:59:00.378: INFO: (0) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 80.494856ms) +Oct 19 15:59:00.378: INFO: (0) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 80.540584ms) +Oct 19 15:59:00.378: INFO: (0) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 80.581051ms) +Oct 19 15:59:00.379: INFO: (0) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 82.077797ms) +Oct 19 15:59:00.379: INFO: (0) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 82.157197ms) +Oct 19 15:59:00.380: INFO: (0) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 82.871681ms) +Oct 19 15:59:00.380: INFO: (0) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 82.947374ms) +Oct 19 15:59:00.380: INFO: (0) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... (200; 6.161522ms) +Oct 19 15:59:00.386: INFO: (1) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 6.154924ms) +Oct 19 15:59:00.386: INFO: (1) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 6.203034ms) +Oct 19 15:59:00.386: INFO: (1) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 6.329226ms) +Oct 19 15:59:00.386: INFO: (1) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 6.286102ms) +Oct 19 15:59:00.390: INFO: (1) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 9.601257ms) +Oct 19 15:59:00.390: INFO: (1) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 9.687843ms) +Oct 19 15:59:00.390: INFO: (1) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 9.653516ms) +Oct 19 15:59:00.390: INFO: (1) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test (200; 5.340249ms) +Oct 19 15:59:00.398: INFO: (2) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 5.528991ms) +Oct 19 15:59:00.401: INFO: (2) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... 
(200; 8.989187ms) +Oct 19 15:59:00.401: INFO: (2) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 9.018435ms) +Oct 19 15:59:00.401: INFO: (2) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 9.060124ms) +Oct 19 15:59:00.401: INFO: (2) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 9.012025ms) +Oct 19 15:59:00.401: INFO: (2) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 8.986529ms) +Oct 19 15:59:00.401: INFO: (2) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... (200; 8.611238ms) +Oct 19 15:59:00.413: INFO: (3) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 8.933762ms) +Oct 19 15:59:00.415: INFO: (3) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 10.800249ms) +Oct 19 15:59:00.416: INFO: (3) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 11.63731ms) +Oct 19 15:59:00.478: INFO: (3) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 73.184885ms) +Oct 19 15:59:00.478: INFO: (3) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 73.622578ms) +Oct 19 15:59:00.478: INFO: (3) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 73.222812ms) +Oct 19 15:59:00.478: INFO: (3) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 73.544454ms) +Oct 19 15:59:00.480: INFO: (3) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 75.717271ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 8.045503ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 8.416782ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 8.390971ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 8.265411ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 8.678897ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 8.667344ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 8.57149ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 8.662388ms) +Oct 19 15:59:00.489: INFO: (4) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test (200; 5.32399ms) +Oct 19 15:59:00.498: INFO: (5) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 5.424873ms) +Oct 19 15:59:00.498: INFO: (5) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 5.695451ms) +Oct 19 15:59:00.498: INFO: (5) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... 
(200; 6.071407ms) +Oct 19 15:59:00.498: INFO: (5) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 6.188677ms) +Oct 19 15:59:00.500: INFO: (5) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 7.444075ms) +Oct 19 15:59:00.500: INFO: (5) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 7.339964ms) +Oct 19 15:59:00.501: INFO: (5) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 9.117709ms) +Oct 19 15:59:00.501: INFO: (5) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 9.234256ms) +Oct 19 15:59:00.501: INFO: (5) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 9.272478ms) +Oct 19 15:59:00.501: INFO: (5) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 9.181122ms) +Oct 19 15:59:00.501: INFO: (5) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 9.345712ms) +Oct 19 15:59:00.501: INFO: (5) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 9.250507ms) +Oct 19 15:59:00.501: INFO: (5) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 9.242555ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 8.062653ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 8.524829ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 8.406253ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 8.384352ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 8.363988ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 8.192649ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 8.111299ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 8.243451ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 8.296041ms) +Oct 19 15:59:00.510: INFO: (6) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: ... (200; 8.912778ms) +Oct 19 15:59:00.590: INFO: (7) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... 
(200; 8.990732ms) +Oct 19 15:59:00.590: INFO: (7) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 8.96085ms) +Oct 19 15:59:00.590: INFO: (7) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 8.902647ms) +Oct 19 15:59:00.590: INFO: (7) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 9.045016ms) +Oct 19 15:59:00.590: INFO: (7) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test (200; 7.772548ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 7.844388ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 7.755089ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 8.286565ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 8.102362ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 7.960072ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 7.933276ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 8.052037ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 8.266412ms) +Oct 19 15:59:00.601: INFO: (8) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... (200; 6.88627ms) +Oct 19 15:59:00.613: INFO: (9) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 8.930843ms) +Oct 19 15:59:00.613: INFO: (9) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 8.849032ms) +Oct 19 15:59:00.613: INFO: (9) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 8.871091ms) +Oct 19 15:59:00.613: INFO: (9) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 8.8833ms) +Oct 19 15:59:00.613: INFO: (9) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: ... (200; 9.684089ms) +Oct 19 15:59:00.614: INFO: (9) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 9.615755ms) +Oct 19 15:59:00.616: INFO: (9) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 11.803537ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 64.442116ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test (200; 64.406532ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 64.132681ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... 
(200; 64.110313ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 64.217495ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 64.738598ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 64.324676ms) +Oct 19 15:59:00.681: INFO: (10) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 64.48877ms) +Oct 19 15:59:00.682: INFO: (10) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 66.069088ms) +Oct 19 15:59:00.684: INFO: (10) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 67.595393ms) +Oct 19 15:59:00.684: INFO: (10) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 67.450233ms) +Oct 19 15:59:00.684: INFO: (10) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 67.667852ms) +Oct 19 15:59:00.685: INFO: (10) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 68.947253ms) +Oct 19 15:59:00.691: INFO: (11) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 5.839736ms) +Oct 19 15:59:00.693: INFO: (11) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 7.227833ms) +Oct 19 15:59:00.693: INFO: (11) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 6.906292ms) +Oct 19 15:59:00.693: INFO: (11) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 6.890659ms) +Oct 19 15:59:00.693: INFO: (11) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 7.060783ms) +Oct 19 15:59:00.693: INFO: (11) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 7.137566ms) +Oct 19 15:59:00.693: INFO: (11) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 6.976675ms) +Oct 19 15:59:00.693: INFO: (11) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test (200; 10.293804ms) +Oct 19 15:59:00.696: INFO: (11) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 10.362521ms) +Oct 19 15:59:00.696: INFO: (11) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 10.071998ms) +Oct 19 15:59:00.696: INFO: (11) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... 
(200; 10.014955ms) +Oct 19 15:59:00.739: INFO: (11) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 53.25772ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 8.113223ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 8.17542ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 8.165148ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 8.095639ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 8.103536ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 8.147662ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 8.192434ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 8.095651ms) +Oct 19 15:59:00.747: INFO: (12) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... (200; 8.256847ms) +Oct 19 15:59:00.749: INFO: (12) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 10.128553ms) +Oct 19 15:59:00.750: INFO: (12) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 10.899178ms) +Oct 19 15:59:00.750: INFO: (12) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 11.034886ms) +Oct 19 15:59:00.750: INFO: (12) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 11.478128ms) +Oct 19 15:59:00.751: INFO: (12) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 11.724722ms) +Oct 19 15:59:00.756: INFO: (13) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 5.531581ms) +Oct 19 15:59:00.759: INFO: (13) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 8.172976ms) +Oct 19 15:59:00.759: INFO: (13) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 8.12735ms) +Oct 19 15:59:00.759: INFO: (13) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... 
(200; 29.38589ms) +Oct 19 15:59:00.780: INFO: (13) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 29.307636ms) +Oct 19 15:59:00.780: INFO: (13) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 29.242828ms) +Oct 19 15:59:00.780: INFO: (13) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 29.262471ms) +Oct 19 15:59:00.783: INFO: (13) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 32.053579ms) +Oct 19 15:59:00.783: INFO: (13) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 32.035667ms) +Oct 19 15:59:00.783: INFO: (13) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 32.058965ms) +Oct 19 15:59:00.788: INFO: (14) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 5.538338ms) +Oct 19 15:59:00.788: INFO: (14) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 5.463426ms) +Oct 19 15:59:00.788: INFO: (14) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 5.535977ms) +Oct 19 15:59:00.788: INFO: (14) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 5.61751ms) +Oct 19 15:59:00.789: INFO: (14) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: ... (200; 11.462817ms) +Oct 19 15:59:00.794: INFO: (14) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 11.396269ms) +Oct 19 15:59:00.795: INFO: (14) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 12.419678ms) +Oct 19 15:59:00.796: INFO: (14) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 13.574906ms) +Oct 19 15:59:00.796: INFO: (14) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 13.489096ms) +Oct 19 15:59:00.796: INFO: (14) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 13.54577ms) +Oct 19 15:59:00.802: INFO: (15) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 5.41155ms) +Oct 19 15:59:00.802: INFO: (15) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 5.469529ms) +Oct 19 15:59:00.803: INFO: (15) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 6.303501ms) +Oct 19 15:59:00.803: INFO: (15) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 6.267073ms) +Oct 19 15:59:00.803: INFO: (15) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... 
(200; 9.026405ms) +Oct 19 15:59:00.806: INFO: (15) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 8.993855ms) +Oct 19 15:59:00.806: INFO: (15) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 9.615896ms) +Oct 19 15:59:00.808: INFO: (15) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 11.778778ms) +Oct 19 15:59:00.808: INFO: (15) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 11.787346ms) +Oct 19 15:59:00.808: INFO: (15) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 11.819787ms) +Oct 19 15:59:00.808: INFO: (15) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 11.807269ms) +Oct 19 15:59:00.814: INFO: (16) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 6.093132ms) +Oct 19 15:59:00.814: INFO: (16) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 5.955368ms) +Oct 19 15:59:00.815: INFO: (16) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 6.062825ms) +Oct 19 15:59:00.815: INFO: (16) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 6.03363ms) +Oct 19 15:59:00.815: INFO: (16) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 6.264831ms) +Oct 19 15:59:00.815: INFO: (16) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test (200; 71.005224ms) +Oct 19 15:59:00.880: INFO: (16) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 71.012351ms) +Oct 19 15:59:00.881: INFO: (16) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 72.804722ms) +Oct 19 15:59:00.881: INFO: (16) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 72.845549ms) +Oct 19 15:59:00.881: INFO: (16) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 72.907438ms) +Oct 19 15:59:00.882: INFO: (16) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 73.787306ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 9.35604ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 9.45149ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: ... 
(200; 9.516176ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 9.453718ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 9.478682ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 9.421602ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 9.464569ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 9.540564ms) +Oct 19 15:59:00.892: INFO: (17) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 9.461024ms) +Oct 19 15:59:00.895: INFO: (17) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 12.752586ms) +Oct 19 15:59:00.895: INFO: (17) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 12.723848ms) +Oct 19 15:59:00.895: INFO: (17) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 12.833539ms) +Oct 19 15:59:00.896: INFO: (17) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 13.742591ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... (200; 8.421796ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 8.405347ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 8.427769ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 8.498561ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 8.394172ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 8.418524ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test<... 
(200; 8.475959ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm/proxy/: test (200; 8.595379ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 8.466304ms) +Oct 19 15:59:00.905: INFO: (18) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 8.539772ms) +Oct 19 15:59:00.907: INFO: (18) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 10.96337ms) +Oct 19 15:59:00.910: INFO: (18) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 13.285384ms) +Oct 19 15:59:00.910: INFO: (18) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 13.376649ms) +Oct 19 15:59:00.910: INFO: (18) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 13.36428ms) +Oct 19 15:59:00.910: INFO: (18) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 13.300334ms) +Oct 19 15:59:00.916: INFO: (19) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname1/proxy/: tls baz (200; 5.556356ms) +Oct 19 15:59:00.916: INFO: (19) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 5.746178ms) +Oct 19 15:59:00.916: INFO: (19) /api/v1/namespaces/proxy-7182/services/https:proxy-service-bc8nq:tlsportname2/proxy/: tls qux (200; 6.517871ms) +Oct 19 15:59:00.918: INFO: (19) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:460/proxy/: tls baz (200; 7.61656ms) +Oct 19 15:59:00.918: INFO: (19) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:462/proxy/: tls qux (200; 7.930517ms) +Oct 19 15:59:00.918: INFO: (19) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:1080/proxy/: test<... (200; 8.011667ms) +Oct 19 15:59:00.918: INFO: (19) /api/v1/namespaces/proxy-7182/pods/https:proxy-service-bc8nq-tdjmm:443/proxy/: test (200; 9.201046ms) +Oct 19 15:59:00.919: INFO: (19) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname2/proxy/: bar (200; 9.146549ms) +Oct 19 15:59:00.919: INFO: (19) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 9.074144ms) +Oct 19 15:59:00.919: INFO: (19) /api/v1/namespaces/proxy-7182/services/http:proxy-service-bc8nq:portname1/proxy/: foo (200; 8.979269ms) +Oct 19 15:59:00.919: INFO: (19) /api/v1/namespaces/proxy-7182/pods/http:proxy-service-bc8nq-tdjmm:1080/proxy/: ... 
(200; 9.032505ms) +Oct 19 15:59:00.922: INFO: (19) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:162/proxy/: bar (200; 11.732913ms) +Oct 19 15:59:00.922: INFO: (19) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname2/proxy/: bar (200; 11.9192ms) +Oct 19 15:59:00.922: INFO: (19) /api/v1/namespaces/proxy-7182/services/proxy-service-bc8nq:portname1/proxy/: foo (200; 11.849233ms) +Oct 19 15:59:00.977: INFO: (19) /api/v1/namespaces/proxy-7182/pods/proxy-service-bc8nq-tdjmm:160/proxy/: foo (200; 67.567264ms) +STEP: deleting ReplicationController proxy-service-bc8nq in namespace proxy-7182, will wait for the garbage collector to delete the pods +Oct 19 15:59:01.037: INFO: Deleting ReplicationController proxy-service-bc8nq took: 5.002535ms +Oct 19 15:59:01.137: INFO: Terminating ReplicationController proxy-service-bc8nq pods took: 100.285563ms +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:02.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-7182" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":346,"completed":32,"skipped":574,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:02.347: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9623 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:18.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9623" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":346,"completed":33,"skipped":616,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:18.597: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9417 +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Oct 19 15:59:20.748: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9417 PodName:pod-sharedvolume-8d9f0169-7037-4a03-be50-bbbd88c74012 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 15:59:20.748: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 15:59:20.894: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:20.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9417" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":34,"skipped":631,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:20.903: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-2112 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 15:59:21.052: INFO: The status of Pod busybox-readonly-fsde5dbd39-fa3c-43e9-a294-308bcb966fce is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:59:23.056: INFO: The status of Pod busybox-readonly-fsde5dbd39-fa3c-43e9-a294-308bcb966fce is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:23.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-2112" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":35,"skipped":676,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:23.076: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2740 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2740.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2740.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2740.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2740.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2740.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2740.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 15:59:25.364: INFO: DNS probes using dns-2740/dns-test-ccf90120-55ec-45dd-b4dd-7f8ac7d1f464 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:25.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2740" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":36,"skipped":707,"failed":0} +SSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:25.378: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8932 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 19 15:59:25.524: INFO: The status of Pod pod-update-ab1d6b07-0167-4f09-b29c-c4b97dd629c1 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:59:27.529: INFO: The status of Pod pod-update-ab1d6b07-0167-4f09-b29c-c4b97dd629c1 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 15:59:29.530: INFO: The status of Pod pod-update-ab1d6b07-0167-4f09-b29c-c4b97dd629c1 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 19 15:59:30.049: INFO: Successfully updated pod "pod-update-ab1d6b07-0167-4f09-b29c-c4b97dd629c1" +STEP: verifying the updated pod is in kubernetes +Oct 19 15:59:30.055: INFO: Pod update 
OK +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:30.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8932" for this suite. +•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":37,"skipped":710,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:30.064: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6939 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-projected-all-test-volume-0ceb57f5-0e90-433f-9f96-0a130b45d37f +STEP: Creating secret with name secret-projected-all-test-volume-7a617929-2fe5-495e-952f-a9e2d15428ee +STEP: Creating a pod to test Check all projections for projected volume plugin +Oct 19 15:59:30.215: INFO: Waiting up to 5m0s for pod "projected-volume-bc68b14a-7591-4c84-8443-e4164ef137fc" in namespace "projected-6939" to be "Succeeded or Failed" +Oct 19 15:59:30.218: INFO: Pod "projected-volume-bc68b14a-7591-4c84-8443-e4164ef137fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960201ms +Oct 19 15:59:32.223: INFO: Pod "projected-volume-bc68b14a-7591-4c84-8443-e4164ef137fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007469223s +STEP: Saw pod success +Oct 19 15:59:32.223: INFO: Pod "projected-volume-bc68b14a-7591-4c84-8443-e4164ef137fc" satisfied condition "Succeeded or Failed" +Oct 19 15:59:32.226: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod projected-volume-bc68b14a-7591-4c84-8443-e4164ef137fc container projected-all-volume-test: +STEP: delete the pod +Oct 19 15:59:32.248: INFO: Waiting for pod projected-volume-bc68b14a-7591-4c84-8443-e4164ef137fc to disappear +Oct 19 15:59:32.251: INFO: Pod projected-volume-bc68b14a-7591-4c84-8443-e4164ef137fc no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:32.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6939" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":38,"skipped":736,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:32.260: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1002 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 19 15:59:32.396: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 create -f -' +Oct 19 15:59:32.535: INFO: stderr: "" +Oct 19 15:59:32.535: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 19 15:59:32.535: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 19 15:59:32.585: INFO: stderr: "" +Oct 19 15:59:32.585: INFO: stdout: "update-demo-nautilus-6mc7t update-demo-nautilus-dsrn2 " +Oct 19 15:59:32.585: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-6mc7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 15:59:32.640: INFO: stderr: "" +Oct 19 15:59:32.640: INFO: stdout: "" +Oct 19 15:59:32.640: INFO: update-demo-nautilus-6mc7t is created but not running +Oct 19 15:59:37.641: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 19 15:59:37.693: INFO: stderr: "" +Oct 19 15:59:37.693: INFO: stdout: "update-demo-nautilus-6mc7t update-demo-nautilus-dsrn2 " +Oct 19 15:59:37.693: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-6mc7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 15:59:37.740: INFO: stderr: "" +Oct 19 15:59:37.740: INFO: stdout: "true" +Oct 19 15:59:37.740: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-6mc7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 19 15:59:37.789: INFO: stderr: "" +Oct 19 15:59:37.789: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 19 15:59:37.790: INFO: validating pod update-demo-nautilus-6mc7t +Oct 19 15:59:37.847: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 19 15:59:37.847: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 19 15:59:37.847: INFO: update-demo-nautilus-6mc7t is verified up and running +Oct 19 15:59:37.847: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-dsrn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 15:59:37.894: INFO: stderr: "" +Oct 19 15:59:37.894: INFO: stdout: "true" +Oct 19 15:59:37.894: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-dsrn2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 19 15:59:37.937: INFO: stderr: "" +Oct 19 15:59:37.937: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 19 15:59:37.937: INFO: validating pod update-demo-nautilus-dsrn2 +Oct 19 15:59:37.996: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 19 15:59:37.996: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 19 15:59:37.996: INFO: update-demo-nautilus-dsrn2 is verified up and running +STEP: scaling down the replication controller +Oct 19 15:59:37.997: INFO: scanned /root for discovery docs: +Oct 19 15:59:37.997: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Oct 19 15:59:39.072: INFO: stderr: "" +Oct 19 15:59:39.072: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 19 15:59:39.072: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 19 15:59:39.122: INFO: stderr: "" +Oct 19 15:59:39.122: INFO: stdout: "update-demo-nautilus-6mc7t update-demo-nautilus-dsrn2 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Oct 19 15:59:44.124: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 19 15:59:44.171: INFO: stderr: "" +Oct 19 15:59:44.171: INFO: stdout: "update-demo-nautilus-dsrn2 " +Oct 19 15:59:44.171: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-dsrn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 15:59:44.227: INFO: stderr: "" +Oct 19 15:59:44.227: INFO: stdout: "true" +Oct 19 15:59:44.227: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-dsrn2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 19 15:59:44.271: INFO: stderr: "" +Oct 19 15:59:44.271: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 19 15:59:44.271: INFO: validating pod update-demo-nautilus-dsrn2 +Oct 19 15:59:44.277: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 19 15:59:44.277: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 19 15:59:44.277: INFO: update-demo-nautilus-dsrn2 is verified up and running +STEP: scaling up the replication controller +Oct 19 15:59:44.278: INFO: scanned /root for discovery docs: +Oct 19 15:59:44.278: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Oct 19 15:59:45.349: INFO: stderr: "" +Oct 19 15:59:45.349: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 19 15:59:45.349: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 19 15:59:45.402: INFO: stderr: "" +Oct 19 15:59:45.402: INFO: stdout: "update-demo-nautilus-dsrn2 update-demo-nautilus-rs24r " +Oct 19 15:59:45.402: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-dsrn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 15:59:45.449: INFO: stderr: "" +Oct 19 15:59:45.449: INFO: stdout: "true" +Oct 19 15:59:45.449: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-dsrn2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 19 15:59:45.498: INFO: stderr: "" +Oct 19 15:59:45.498: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 19 15:59:45.498: INFO: validating pod update-demo-nautilus-dsrn2 +Oct 19 15:59:45.547: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 19 15:59:45.547: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 19 15:59:45.547: INFO: update-demo-nautilus-dsrn2 is verified up and running +Oct 19 15:59:45.547: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-rs24r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 15:59:45.593: INFO: stderr: "" +Oct 19 15:59:45.593: INFO: stdout: "true" +Oct 19 15:59:45.593: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods update-demo-nautilus-rs24r -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 19 15:59:45.639: INFO: stderr: "" +Oct 19 15:59:45.639: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 19 15:59:45.639: INFO: validating pod update-demo-nautilus-rs24r +Oct 19 15:59:45.694: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 19 15:59:45.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 19 15:59:45.694: INFO: update-demo-nautilus-rs24r is verified up and running +STEP: using delete to clean up resources +Oct 19 15:59:45.694: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 delete --grace-period=0 --force -f -' +Oct 19 15:59:45.741: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 19 15:59:45.741: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 19 15:59:45.741: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get rc,svc -l name=update-demo --no-headers' +Oct 19 15:59:45.791: INFO: stderr: "No resources found in kubectl-1002 namespace.\n" +Oct 19 15:59:45.791: INFO: stdout: "" +Oct 19 15:59:45.791: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1002 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 19 15:59:45.840: INFO: stderr: "" +Oct 19 15:59:45.840: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:45.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1002" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":39,"skipped":741,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:45.850: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-898 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Oct 19 15:59:46.002: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-898 8c9d7191-c0f6-4caa-847c-aa14e8c9114a 7492 0 2021-10-19 15:59:45 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-19 15:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 15:59:46.002: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-898 8c9d7191-c0f6-4caa-847c-aa14e8c9114a 7493 0 2021-10-19 15:59:45 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-19 15:59:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 15:59:46.002: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-898 8c9d7191-c0f6-4caa-847c-aa14e8c9114a 7494 0 2021-10-19 15:59:45 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-19 15:59:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Oct 19 15:59:56.028: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-898 8c9d7191-c0f6-4caa-847c-aa14e8c9114a 7552 0 2021-10-19 15:59:45 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-19 15:59:45 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 15:59:56.028: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-898 8c9d7191-c0f6-4caa-847c-aa14e8c9114a 7553 0 2021-10-19 15:59:45 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-19 15:59:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 15:59:56.028: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-898 8c9d7191-c0f6-4caa-847c-aa14e8c9114a 7554 0 2021-10-19 15:59:45 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-19 15:59:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:56.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-898" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":40,"skipped":758,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:56.037: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2439 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Starting the proxy +Oct 19 15:59:56.187: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2439 proxy --unix-socket=/tmp/kubectl-proxy-unix978741801/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 15:59:56.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2439" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":41,"skipped":802,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 15:59:56.226: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4670 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 15:59:57.142: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:00:00.173: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:00.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4670" for this suite. +STEP: Destroying namespace "webhook-4670-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":42,"skipped":813,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:00.333: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-712 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:00:00.473: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-3738d763-8268-43ba-b31f-dee9060aa347" in namespace "security-context-test-712" to be "Succeeded or Failed" +Oct 19 16:00:00.475: INFO: Pod "busybox-privileged-false-3738d763-8268-43ba-b31f-dee9060aa347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.865455ms +Oct 19 16:00:02.480: INFO: Pod "busybox-privileged-false-3738d763-8268-43ba-b31f-dee9060aa347": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007822486s +Oct 19 16:00:02.480: INFO: Pod "busybox-privileged-false-3738d763-8268-43ba-b31f-dee9060aa347" satisfied condition "Succeeded or Failed" +Oct 19 16:00:02.487: INFO: Got logs for pod "busybox-privileged-false-3738d763-8268-43ba-b31f-dee9060aa347": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:02.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-712" for this suite. 
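+
+(The unprivileged behaviour checked above amounts to a pod like the following sketch; the "RTNETLINK answers: Operation not permitted" line captured in the logs is the expected outcome:)
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-unprivileged
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox
+    # Adding a network link needs CAP_NET_ADMIN, which privileged: false withholds.
+    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
+    securityContext:
+      privileged: false
+EOF
+```
+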
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":43,"skipped":825,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:02.497: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4781 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:00:02.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-574512f9-81e2-4fed-849b-da726a781215" in namespace "projected-4781" to be "Succeeded or Failed" +Oct 19 16:00:02.649: INFO: Pod "downwardapi-volume-574512f9-81e2-4fed-849b-da726a781215": Phase="Pending", Reason="", readiness=false. Elapsed: 3.135165ms +Oct 19 16:00:04.652: INFO: Pod "downwardapi-volume-574512f9-81e2-4fed-849b-da726a781215": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006820914s +STEP: Saw pod success +Oct 19 16:00:04.652: INFO: Pod "downwardapi-volume-574512f9-81e2-4fed-849b-da726a781215" satisfied condition "Succeeded or Failed" +Oct 19 16:00:04.656: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-574512f9-81e2-4fed-849b-da726a781215 container client-container: +STEP: delete the pod +Oct 19 16:00:04.670: INFO: Waiting for pod downwardapi-volume-574512f9-81e2-4fed-849b-da726a781215 to disappear +Oct 19 16:00:04.673: INFO: Pod downwardapi-volume-574512f9-81e2-4fed-849b-da726a781215 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:04.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4781" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":44,"skipped":852,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:04.682: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-2779 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override arguments +Oct 19 16:00:04.826: INFO: Waiting up to 5m0s for pod "client-containers-b61b248d-b9c4-42d5-ad3b-a6904f6b3b69" in namespace "containers-2779" to be "Succeeded or Failed" +Oct 19 16:00:04.830: INFO: Pod "client-containers-b61b248d-b9c4-42d5-ad3b-a6904f6b3b69": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997073ms +Oct 19 16:00:06.836: INFO: Pod "client-containers-b61b248d-b9c4-42d5-ad3b-a6904f6b3b69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009954491s +STEP: Saw pod success +Oct 19 16:00:06.836: INFO: Pod "client-containers-b61b248d-b9c4-42d5-ad3b-a6904f6b3b69" satisfied condition "Succeeded or Failed" +Oct 19 16:00:06.840: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod client-containers-b61b248d-b9c4-42d5-ad3b-a6904f6b3b69 container agnhost-container: +STEP: delete the pod +Oct 19 16:00:06.854: INFO: Waiting for pod client-containers-b61b248d-b9c4-42d5-ad3b-a6904f6b3b69 to disappear +Oct 19 16:00:06.857: INFO: Pod client-containers-b61b248d-b9c4-42d5-ad3b-a6904f6b3b69 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:06.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2779" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":45,"skipped":874,"failed":0} +SSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:06.867: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2648 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating pod +Oct 19 16:00:07.014: INFO: The status of Pod pod-hostip-e636583f-2bb4-43a2-8e11-b0896ea4d6cc is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:00:09.019: INFO: The status of Pod pod-hostip-e636583f-2bb4-43a2-8e11-b0896ea4d6cc is Running (Ready = true) +Oct 19 16:00:09.025: INFO: Pod pod-hostip-e636583f-2bb4-43a2-8e11-b0896ea4d6cc has hostIP: 10.250.1.123 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:09.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2648" for this suite. +•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":46,"skipped":879,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:09.034: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4304 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-df193c67-58a1-45e9-aa57-be8f2dea3141 +STEP: Creating a pod to test consume configMaps +Oct 19 16:00:09.183: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d208d21-7674-4870-b092-c25c10a3ca1e" in namespace "projected-4304" to be "Succeeded or Failed" +Oct 19 16:00:09.186: INFO: Pod "pod-projected-configmaps-0d208d21-7674-4870-b092-c25c10a3ca1e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.319489ms +Oct 19 16:00:11.191: INFO: Pod "pod-projected-configmaps-0d208d21-7674-4870-b092-c25c10a3ca1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008786094s +STEP: Saw pod success +Oct 19 16:00:11.191: INFO: Pod "pod-projected-configmaps-0d208d21-7674-4870-b092-c25c10a3ca1e" satisfied condition "Succeeded or Failed" +Oct 19 16:00:11.195: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-configmaps-0d208d21-7674-4870-b092-c25c10a3ca1e container agnhost-container: +STEP: delete the pod +Oct 19 16:00:11.208: INFO: Waiting for pod pod-projected-configmaps-0d208d21-7674-4870-b092-c25c10a3ca1e to disappear +Oct 19 16:00:11.212: INFO: Pod pod-projected-configmaps-0d208d21-7674-4870-b092-c25c10a3ca1e no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:11.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4304" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":47,"skipped":893,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:11.222: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2679 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2679.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2679.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2679.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2679.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2679.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2679.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 16:00:13.559: INFO: DNS probes using dns-2679/dns-test-c3c71043-ab71-4158-be0a-b163c3caad9d succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:13.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2679" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":48,"skipped":902,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:13.580: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename prestop +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-2621 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating server pod server in namespace prestop-2621 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-2621 +STEP: Deleting pre-stop pod +Oct 19 16:00:22.843: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:22.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-2621" for this suite. 
+•{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":49,"skipped":921,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:22.863: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6783 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-4b9c159b-1b0d-4a55-8e50-fddf861ce697 +STEP: Creating a pod to test consume configMaps +Oct 19 16:00:23.016: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22e3bb3a-6c19-459e-a568-9808a4def3d8" in namespace "projected-6783" to be "Succeeded or Failed" +Oct 19 16:00:23.019: INFO: Pod "pod-projected-configmaps-22e3bb3a-6c19-459e-a568-9808a4def3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875496ms +Oct 19 16:00:25.032: INFO: Pod "pod-projected-configmaps-22e3bb3a-6c19-459e-a568-9808a4def3d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015684656s +STEP: Saw pod success +Oct 19 16:00:25.032: INFO: Pod "pod-projected-configmaps-22e3bb3a-6c19-459e-a568-9808a4def3d8" satisfied condition "Succeeded or Failed" +Oct 19 16:00:25.038: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-configmaps-22e3bb3a-6c19-459e-a568-9808a4def3d8 container agnhost-container: +STEP: delete the pod +Oct 19 16:00:25.094: INFO: Waiting for pod pod-projected-configmaps-22e3bb3a-6c19-459e-a568-9808a4def3d8 to disappear +Oct 19 16:00:25.097: INFO: Pod pod-projected-configmaps-22e3bb3a-6c19-459e-a568-9808a4def3d8 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:25.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6783" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":50,"skipped":936,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:25.107: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6656 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:00:25.679: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:00:28.695: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:28.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6656" for this suite. +STEP: Destroying namespace "webhook-6656-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":51,"skipped":961,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:28.917: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-6284 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-6284 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 19 16:00:29.057: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 19 16:00:29.091: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:00:31.095: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:00:33.095: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:00:35.104: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:00:37.096: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:00:39.096: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 19 16:00:39.102: INFO: The status of Pod netserver-1 is Running (Ready = false) +Oct 19 16:00:41.107: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 19 16:00:43.130: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 19 16:00:43.130: INFO: Breadth first check of 100.96.0.60 on host 10.250.1.123... +Oct 19 16:00:43.133: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.0.61:9080/dial?request=hostname&protocol=http&host=100.96.0.60&port=8083&tries=1'] Namespace:pod-network-test-6284 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:00:43.134: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:00:43.299: INFO: Waiting for responses: map[] +Oct 19 16:00:43.299: INFO: reached 100.96.0.60 after 0/1 tries +Oct 19 16:00:43.299: INFO: Breadth first check of 100.96.1.30 on host 10.250.3.120... 
+Oct 19 16:00:43.304: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.0.61:9080/dial?request=hostname&protocol=http&host=100.96.1.30&port=8083&tries=1'] Namespace:pod-network-test-6284 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:00:43.304: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:00:43.564: INFO: Waiting for responses: map[] +Oct 19 16:00:43.564: INFO: reached 100.96.1.30 after 0/1 tries +Oct 19 16:00:43.564: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:43.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-6284" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":52,"skipped":999,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:43.572: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7334 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396 +STEP: creating an pod +Oct 19 16:00:43.712: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Oct 19 16:00:43.770: INFO: stderr: "" +Oct 19 16:00:43.770: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for log generator to start. +Oct 19 16:00:43.770: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Oct 19 16:00:43.770: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7334" to be "running and ready, or succeeded" +Oct 19 16:00:43.773: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.285601ms +Oct 19 16:00:45.777: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007057638s +Oct 19 16:00:45.777: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Oct 19 16:00:45.777: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] +STEP: checking for a matching strings +Oct 19 16:00:45.777: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 logs logs-generator logs-generator' +Oct 19 16:00:45.835: INFO: stderr: "" +Oct 19 16:00:45.835: INFO: stdout: "I1019 16:00:44.379998 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/lz7p 393\nI1019 16:00:44.580066 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/gm5 260\nI1019 16:00:44.780593 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/g6fz 209\nI1019 16:00:44.980889 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/jj24 536\nI1019 16:00:45.180120 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/sr9n 230\nI1019 16:00:45.380409 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/vgw9 371\nI1019 16:00:45.580695 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/7qc 422\nI1019 16:00:45.780856 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/djrv 568\n" +STEP: limiting log lines +Oct 19 16:00:45.835: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 logs logs-generator logs-generator --tail=1' +Oct 19 16:00:45.935: INFO: stderr: "" +Oct 19 16:00:45.935: INFO: stdout: "I1019 16:00:45.780856 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/djrv 568\n" +Oct 19 16:00:45.935: INFO: got output "I1019 16:00:45.780856 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/djrv 568\n" +STEP: limiting log bytes +Oct 19 16:00:45.935: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 logs logs-generator logs-generator --limit-bytes=1' +Oct 19 16:00:45.993: INFO: stderr: "" +Oct 19 16:00:45.993: INFO: stdout: "I" +Oct 19 16:00:45.993: INFO: got output "I" +STEP: exposing timestamps +Oct 19 16:00:45.993: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 logs logs-generator logs-generator --tail=1 --timestamps' +Oct 19 16:00:46.046: INFO: stderr: "" +Oct 19 16:00:46.046: INFO: stdout: "2021-10-19T16:00:45.980151081Z I1019 16:00:45.980089 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/4hl 471\n" +Oct 19 16:00:46.046: INFO: got output "2021-10-19T16:00:45.980151081Z I1019 16:00:45.980089 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/4hl 471\n" +STEP: restricting to a time range +Oct 19 16:00:48.548: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 logs logs-generator logs-generator --since=1s' +Oct 19 16:00:48.607: INFO: stderr: "" +Oct 19 16:00:48.607: INFO: stdout: "I1019 
16:00:47.780344 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/22b 590\nI1019 16:00:47.980636 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/2x5k 462\nI1019 16:00:48.180902 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/5qf 502\nI1019 16:00:48.380128 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/6k9 342\nI1019 16:00:48.580540 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/bwcf 481\n" +Oct 19 16:00:48.607: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 logs logs-generator logs-generator --since=24h' +Oct 19 16:00:48.663: INFO: stderr: "" +Oct 19 16:00:48.663: INFO: stdout: "I1019 16:00:44.379998 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/lz7p 393\nI1019 16:00:44.580066 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/gm5 260\nI1019 16:00:44.780593 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/g6fz 209\nI1019 16:00:44.980889 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/jj24 536\nI1019 16:00:45.180120 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/sr9n 230\nI1019 16:00:45.380409 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/vgw9 371\nI1019 16:00:45.580695 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/7qc 422\nI1019 16:00:45.780856 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/djrv 568\nI1019 16:00:45.980089 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/4hl 471\nI1019 16:00:46.180198 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/4bps 407\nI1019 16:00:46.380501 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/bnl 237\nI1019 16:00:46.580791 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/v6f 397\nI1019 16:00:46.780031 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/4sqv 490\nI1019 16:00:46.980322 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/6bw 575\nI1019 16:00:47.180610 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/5qqj 309\nI1019 16:00:47.380898 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/kmp 210\nI1019 16:00:47.580056 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/kb6 378\nI1019 16:00:47.780344 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/22b 590\nI1019 16:00:47.980636 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/2x5k 462\nI1019 16:00:48.180902 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/5qf 502\nI1019 16:00:48.380128 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/6k9 342\nI1019 16:00:48.580540 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/bwcf 481\n" +[AfterEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401 +Oct 19 16:00:48.663: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-7334 delete pod logs-generator' +Oct 19 16:00:49.597: INFO: stderr: "" +Oct 19 16:00:49.597: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:49.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7334" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":53,"skipped":1012,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:49.606: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-5983 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:00:50.047: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:00:53.068: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:00:53.072: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:56.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-5983" for this suite. 
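+
+(Conversion between CR versions, as exercised above, is wired up in the CRD itself via spec.conversion; a minimal sketch with illustrative names — a real setup also needs a caBundle and a TLS-serving webhook behind the referenced service:)
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: examples.stable.example.com
+spec:
+  group: stable.example.com
+  scope: Namespaced
+  names:
+    plural: examples
+    singular: example
+    kind: Example
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        x-kubernetes-preserve-unknown-fields: true
+  - name: v2
+    served: true
+    storage: false
+    schema:
+      openAPIV3Schema:
+        type: object
+        x-kubernetes-preserve-unknown-fields: true
+  conversion:
+    strategy: Webhook
+    webhook:
+      conversionReviewVersions: ["v1"]
+      clientConfig:
+        # caBundle: <base64 CA> would normally go here so the
+        # apiserver trusts the webhook's serving certificate.
+        service:
+          name: conversion-webhook
+          namespace: default
+          path: /crdconvert
+EOF
+```
+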
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":54,"skipped":1014,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:56.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-888 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 19 16:00:56.701: INFO: Waiting up to 5m0s for pod "security-context-2e044e7e-ff35-49ab-b0ce-97cc4d5531c9" in namespace "security-context-888" to be "Succeeded or Failed" +Oct 19 16:00:56.705: INFO: Pod "security-context-2e044e7e-ff35-49ab-b0ce-97cc4d5531c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.929855ms +Oct 19 16:00:58.709: INFO: Pod "security-context-2e044e7e-ff35-49ab-b0ce-97cc4d5531c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007516697s +STEP: Saw pod success +Oct 19 16:00:58.709: INFO: Pod "security-context-2e044e7e-ff35-49ab-b0ce-97cc4d5531c9" satisfied condition "Succeeded or Failed" +Oct 19 16:00:58.712: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod security-context-2e044e7e-ff35-49ab-b0ce-97cc4d5531c9 container test-container: +STEP: delete the pod +Oct 19 16:00:58.727: INFO: Waiting for pod security-context-2e044e7e-ff35-49ab-b0ce-97cc4d5531c9 to disappear +Oct 19 16:00:58.730: INFO: Pod security-context-2e044e7e-ff35-49ab-b0ce-97cc4d5531c9 no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:00:58.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-888" for this suite. 
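+
+(Container-level RunAsUser/RunAsGroup, as verified above, can be checked with a one-shot pod; a sketch, uid/gid values illustrative:)
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: runas-example
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    # Prints the uid (1001) and gid (1002) forced by the securityContext.
+    command: ["sh", "-c", "id -u; id -g"]
+    securityContext:
+      runAsUser: 1001
+      runAsGroup: 1002
+EOF
+kubectl logs runas-example   # once the pod has completed
+```
+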
+•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":55,"skipped":1071,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:00:58.738: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4122 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +Oct 19 16:00:58.876: INFO: Creating simple deployment test-deployment-lvl58 +Oct 19 16:00:58.892: INFO: deployment "test-deployment-lvl58" doesn't have the required revision set +STEP: Getting /status +Oct 19 16:01:00.910: INFO: Deployment test-deployment-lvl58 has Conditions: [{Available True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-lvl58-794dd694d8" has successfully progressed.}] +STEP: updating Deployment Status +Oct 19 16:01:00.918: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770256059, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770256059, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770256059, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770256058, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-lvl58-794dd694d8\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Oct 19 16:01:00.920: INFO: Observed &Deployment event: ADDED +Oct 19 16:01:00.920: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-lvl58-794dd694d8"} +Oct 19 16:01:00.920: INFO: Observed &Deployment event: MODIFIED +Oct 
19 16:01:00.920: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-lvl58-794dd694d8"} +Oct 19 16:01:00.920: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 19 16:01:00.921: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.921: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 19 16:01:00.921: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-lvl58-794dd694d8" is progressing.} +Oct 19 16:01:00.921: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.921: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 19 16:01:00.921: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-lvl58-794dd694d8" has successfully progressed.} +Oct 19 16:01:00.921: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.921: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 19 16:01:00.921: INFO: Observed Deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-lvl58-794dd694d8" has successfully progressed.} +Oct 19 16:01:00.921: INFO: Found Deployment test-deployment-lvl58 in namespace deployment-4122 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 19 16:01:00.921: INFO: Deployment test-deployment-lvl58 has an updated status +STEP: patching the Statefulset Status +Oct 19 16:01:00.921: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 19 16:01:00.925: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Oct 19 16:01:00.929: INFO: Observed &Deployment event: ADDED +Oct 19 16:01:00.929: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-lvl58-794dd694d8"} +Oct 19 16:01:00.929: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.929: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-lvl58-794dd694d8"} +Oct 19 16:01:00.929: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 19 16:01:00.929: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.929: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 19 16:01:00.929: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:58 +0000 UTC 2021-10-19 16:00:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-lvl58-794dd694d8" is progressing.} +Oct 19 16:01:00.929: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.929: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 19 16:01:00.929: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-lvl58-794dd694d8" has successfully progressed.} +Oct 19 16:01:00.930: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.930: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 19 16:01:00.930: INFO: Observed deployment test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-10-19 16:00:59 +0000 UTC 2021-10-19 16:00:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-lvl58-794dd694d8" has successfully progressed.} +Oct 19 16:01:00.930: INFO: Observed deployment 
test-deployment-lvl58 in namespace deployment-4122 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 19 16:01:00.930: INFO: Observed &Deployment event: MODIFIED +Oct 19 16:01:00.930: INFO: Found deployment test-deployment-lvl58 in namespace deployment-4122 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Oct 19 16:01:00.930: INFO: Deployment test-deployment-lvl58 has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 16:01:00.933: INFO: Deployment "test-deployment-lvl58": +&Deployment{ObjectMeta:{test-deployment-lvl58 deployment-4122 78353099-f2d2-445d-8439-23b47c87de77 8369 1 2021-10-19 16:00:58 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-19 16:00:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2021-10-19 16:01:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2021-10-19 16:01:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005bcfc78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-lvl58-794dd694d8",LastUpdateTime:2021-10-19 16:01:00 +0000 UTC,LastTransitionTime:2021-10-19 16:01:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 19 16:01:00.936: INFO: New ReplicaSet "test-deployment-lvl58-794dd694d8" of Deployment "test-deployment-lvl58": +&ReplicaSet{ObjectMeta:{test-deployment-lvl58-794dd694d8 deployment-4122 12233fa4-9247-4506-acad-ec2b65409d13 8359 1 2021-10-19 16:00:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-lvl58 78353099-f2d2-445d-8439-23b47c87de77 0xc003890037 0xc003890038}] [] [{kube-controller-manager Update apps/v1 2021-10-19 16:00:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78353099-f2d2-445d-8439-23b47c87de77\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 16:00:59 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 794dd694d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0038900e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 19 16:01:00.940: INFO: Pod "test-deployment-lvl58-794dd694d8-7lfx4" is available: +&Pod{ObjectMeta:{test-deployment-lvl58-794dd694d8-7lfx4 test-deployment-lvl58-794dd694d8- deployment-4122 bc6402a5-ff50-42f1-9b2d-383f883ce159 8358 0 2021-10-19 16:00:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[cni.projectcalico.org/podIP:100.96.0.65/32 cni.projectcalico.org/podIPs:100.96.0.65/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-deployment-lvl58-794dd694d8 12233fa4-9247-4506-acad-ec2b65409d13 0xc003890497 0xc003890498}] [] [{kube-controller-manager Update v1 2021-10-19 16:00:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12233fa4-9247-4506-acad-ec2b65409d13\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:00:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:00:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8b79h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8b79h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralConta
iners:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:00:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:00:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:00:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.65,StartTime:2021-10-19 16:00:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:00:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://da0ac8f0f0547c1592bc201e6af80963684f4d26ed04b8227f6e1c1d4ab37fcc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:01:00.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4122" for this suite. 
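As an aside for readers reproducing this check by hand: the test above exercises the Deployment `/status` subresource. A minimal sketch with plain kubectl (the object and namespace names below are simply the ones from this run; the `--subresource` flag requires kubectl v1.24 or newer):

```bash
# Read status through the /status endpoint, as the watch above observes it
kubectl -n deployment-4122 get deployment test-deployment-lvl58 \
  --subresource=status -o jsonpath='{.status.conditions}'

# Patch only the status subresource, mirroring the test's StatusPatched condition.
# A merge patch replaces the whole conditions list; the controller re-adds its own,
# which is why the dump above later shows FoundNewReplicaSet again.
kubectl -n deployment-4122 patch deployment test-deployment-lvl58 \
  --subresource=status --type=merge \
  -p '{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}'
```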
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":56,"skipped":1079,"failed":0} +SSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:01:00.947: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-2680 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +Oct 19 16:01:01.087: INFO: created test-event-1 +Oct 19 16:01:01.091: INFO: created test-event-2 +Oct 19 16:01:01.094: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Oct 19 16:01:01.098: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Oct 19 16:01:01.106: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:01:01.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-2680" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":57,"skipped":1087,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:01:01.117: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename limitrange +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-3859 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Oct 19 16:01:01.255: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Oct 19 16:01:01.262: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 19 16:01:01.262: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Oct 19 16:01:01.272: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 19 16:01:01.272: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Oct 19 16:01:01.283: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Oct 19 16:01:01.283: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Oct 19 16:01:08.325: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:01:08.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-3859" for this suite. +•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":346,"completed":58,"skipped":1123,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:01:08.345: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5291 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-6912447b-ec64-47b2-937e-9e1a317c0224 +STEP: Creating a pod to test consume secrets +Oct 19 16:01:08.492: INFO: Waiting up to 5m0s for pod "pod-secrets-d0d67dc4-18a7-4d7b-94f1-d97116974442" in namespace "secrets-5291" to be "Succeeded or Failed" +Oct 19 16:01:08.495: INFO: Pod "pod-secrets-d0d67dc4-18a7-4d7b-94f1-d97116974442": Phase="Pending", Reason="", readiness=false. Elapsed: 3.029014ms +Oct 19 16:01:10.499: INFO: Pod "pod-secrets-d0d67dc4-18a7-4d7b-94f1-d97116974442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006671671s +STEP: Saw pod success +Oct 19 16:01:10.499: INFO: Pod "pod-secrets-d0d67dc4-18a7-4d7b-94f1-d97116974442" satisfied condition "Succeeded or Failed" +Oct 19 16:01:10.502: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-d0d67dc4-18a7-4d7b-94f1-d97116974442 container secret-volume-test: +STEP: delete the pod +Oct 19 16:01:10.514: INFO: Waiting for pod pod-secrets-d0d67dc4-18a7-4d7b-94f1-d97116974442 to disappear +Oct 19 16:01:10.517: INFO: Pod pod-secrets-d0d67dc4-18a7-4d7b-94f1-d97116974442 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:01:10.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5291" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":59,"skipped":1126,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:01:10.525: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-3758 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Oct 19 16:01:12.678: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3758 PodName:var-expansion-3660110d-f9c4-48ec-a62e-ef4df298883a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:01:12.678: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: test for file in mounted path +Oct 19 16:01:12.836: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3758 PodName:var-expansion-3660110d-f9c4-48ec-a62e-ef4df298883a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:01:12.836: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: updating the annotation value +Oct 19 16:01:13.477: INFO: Successfully updated pod "var-expansion-3660110d-f9c4-48ec-a62e-ef4df298883a" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Oct 19 16:01:13.481: INFO: Deleting pod "var-expansion-3660110d-f9c4-48ec-a62e-ef4df298883a" in namespace "var-expansion-3758" +Oct 19 16:01:13.484: INFO: Wait up to 5m0s for pod "var-expansion-3660110d-f9c4-48ec-a62e-ef4df298883a" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:01:47.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3758" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":60,"skipped":1142,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:01:47.514: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1692 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-82cb7876-1eef-4347-ba0a-878f6800cb9c +STEP: Creating a pod to test consume configMaps +Oct 19 16:01:47.720: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23468c67-b5d8-498d-93c3-d6626f010aaa" in namespace "projected-1692" to be "Succeeded or Failed" +Oct 19 16:01:47.723: INFO: Pod "pod-projected-configmaps-23468c67-b5d8-498d-93c3-d6626f010aaa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.056246ms +Oct 19 16:01:49.727: INFO: Pod "pod-projected-configmaps-23468c67-b5d8-498d-93c3-d6626f010aaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007115567s +STEP: Saw pod success +Oct 19 16:01:49.727: INFO: Pod "pod-projected-configmaps-23468c67-b5d8-498d-93c3-d6626f010aaa" satisfied condition "Succeeded or Failed" +Oct 19 16:01:49.730: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-configmaps-23468c67-b5d8-498d-93c3-d6626f010aaa container agnhost-container: +STEP: delete the pod +Oct 19 16:01:49.745: INFO: Waiting for pod pod-projected-configmaps-23468c67-b5d8-498d-93c3-d6626f010aaa to disappear +Oct 19 16:01:49.748: INFO: Pod pod-projected-configmaps-23468c67-b5d8-498d-93c3-d6626f010aaa no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:01:49.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1692" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":61,"skipped":1209,"failed":0} +SSSS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:01:49.758: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-5802 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Oct 19 16:01:49.905: INFO: Waiting up to 5m0s for pod "security-context-45478bec-201d-4a0b-83a2-44d296013577" in namespace "security-context-5802" to be "Succeeded or Failed" +Oct 19 16:01:49.914: INFO: Pod "security-context-45478bec-201d-4a0b-83a2-44d296013577": Phase="Pending", Reason="", readiness=false. Elapsed: 9.202976ms +Oct 19 16:01:51.922: INFO: Pod "security-context-45478bec-201d-4a0b-83a2-44d296013577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016314835s +STEP: Saw pod success +Oct 19 16:01:51.922: INFO: Pod "security-context-45478bec-201d-4a0b-83a2-44d296013577" satisfied condition "Succeeded or Failed" +Oct 19 16:01:51.925: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod security-context-45478bec-201d-4a0b-83a2-44d296013577 container test-container: +STEP: delete the pod +Oct 19 16:01:51.940: INFO: Waiting for pod security-context-45478bec-201d-4a0b-83a2-44d296013577 to disappear +Oct 19 16:01:51.943: INFO: Pod security-context-45478bec-201d-4a0b-83a2-44d296013577 no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:01:51.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-5802" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":62,"skipped":1213,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:01:51.952: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7710 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:02:52.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7710" for this suite. +•{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":63,"skipped":1226,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:02:52.110: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5964 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:02:52.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd94b405-9b8a-4573-9c9a-9dcc8b36d29e" in namespace "projected-5964" to be "Succeeded or Failed" +Oct 19 16:02:52.263: INFO: Pod "downwardapi-volume-cd94b405-9b8a-4573-9c9a-9dcc8b36d29e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.873372ms +Oct 19 16:02:54.267: INFO: Pod "downwardapi-volume-cd94b405-9b8a-4573-9c9a-9dcc8b36d29e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007171109s +STEP: Saw pod success +Oct 19 16:02:54.267: INFO: Pod "downwardapi-volume-cd94b405-9b8a-4573-9c9a-9dcc8b36d29e" satisfied condition "Succeeded or Failed" +Oct 19 16:02:54.270: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-cd94b405-9b8a-4573-9c9a-9dcc8b36d29e container client-container: +STEP: delete the pod +Oct 19 16:02:54.284: INFO: Waiting for pod downwardapi-volume-cd94b405-9b8a-4573-9c9a-9dcc8b36d29e to disappear +Oct 19 16:02:54.287: INFO: Pod downwardapi-volume-cd94b405-9b8a-4573-9c9a-9dcc8b36d29e no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:02:54.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5964" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":64,"skipped":1263,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:02:54.296: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6732 +STEP: Waiting for a default service account to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Oct 19 16:02:54.432: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: mark a version not served +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:03:10.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6732" for this suite. 
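The served-version flip the test performs can be reproduced with a JSON patch against the CRD; a sketch assuming a hypothetical two-version CRD named foos.example.com whose second version is being retired:

```bash
# Stop serving the second version; its schema should drop out of the published spec
kubectl patch crd foos.example.com --type=json \
  -p '[{"op":"replace","path":"/spec/versions/1/served","value":false}]'

# Coarse check against the aggregated OpenAPI document (identifier is hypothetical)
kubectl get --raw /openapi/v2 | grep -c 'v2.Foo'   # expect 0 once removed
```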
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":65,"skipped":1276,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:03:10.849: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-196 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:03:10.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-196" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":66,"skipped":1319,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:03:11.003: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-8161 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:03:11.138: INFO: Creating ReplicaSet my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4 +Oct 19 16:03:11.146: INFO: Pod name my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4: Found 0 pods out of 1 +Oct 19 16:03:16.153: INFO: Pod name my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4: Found 1 pods out of 1 +Oct 19 16:03:16.153: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4" is running +Oct 19 16:03:16.155: INFO: Pod "my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4-ns4kp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2021-10-19 16:03:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-19 16:03:11 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-19 16:03:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-19 16:03:11 +0000 UTC Reason: Message:}]) +Oct 19 16:03:16.155: INFO: Trying to dial the pod +Oct 19 16:03:21.216: INFO: Controller my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4: Got expected result from replica 1 [my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4-ns4kp]: "my-hostname-basic-42c6f717-bde6-4933-b022-64a7949df8f4-ns4kp", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:03:21.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-8161" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":67,"skipped":1361,"failed":0} +SSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:03:21.226: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-110 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod test-webserver-742bf41b-8971-4852-81c2-2a38b9caa249 in namespace container-probe-110 +Oct 19 16:03:23.383: INFO: Started pod test-webserver-742bf41b-8971-4852-81c2-2a38b9caa249 in namespace container-probe-110 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 19 16:03:23.386: INFO: Initial restart count of pod test-webserver-742bf41b-8971-4852-81c2-2a38b9caa249 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:07:23.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-110" for this suite. 
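The liveness check above is the standard httpGet probe; a sketch of the shape of pod involved (names hypothetical; agnhost's test-webserver answers on port 80), whose restart count should stay at 0 as long as the probe keeps succeeding:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["test-webserver"]
    livenessProbe:
      httpGet:
        path: /          # always answered, so the kubelet never restarts the pod
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0
```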
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":68,"skipped":1367,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:07:23.994: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-96 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-96 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 19 16:07:24.139: INFO: Found 0 stateful pods, waiting for 3 +Oct 19 16:07:34.145: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 16:07:34.145: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 16:07:34.145: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 19 16:07:34.173: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Oct 19 16:07:44.204: INFO: Updating stateful set ss2 +Oct 19 16:07:44.220: INFO: Waiting for Pod statefulset-96/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Restoring Pods to the correct revision when they are deleted +Oct 19 16:07:54.249: INFO: Found 1 stateful pods, waiting for 3 +Oct 19 16:08:04.253: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 16:08:04.253: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 16:08:04.253: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Oct 19 16:08:04.276: INFO: Updating stateful set ss2 +Oct 19 16:08:04.284: INFO: Waiting for Pod statefulset-96/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Oct 19 16:08:14.309: INFO: Updating stateful set ss2 +Oct 19 16:08:14.338: INFO: Waiting for StatefulSet statefulset-96/ss2 to complete update +Oct 19 16:08:14.338: INFO: Waiting for Pod statefulset-96/ss2-0 to have revision ss2-5bbbc9fc94 update 
revision ss2-677d6db895 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 19 16:08:24.346: INFO: Deleting all statefulset in ns statefulset-96 +Oct 19 16:08:24.349: INFO: Scaling statefulset ss2 to 0 +Oct 19 16:08:34.366: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 19 16:08:34.369: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:08:34.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-96" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":69,"skipped":1377,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:08:34.392: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3967 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service endpoint-test2 in namespace services-3967 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3967 to expose endpoints map[] +Oct 19 16:08:34.545: INFO: successfully validated that service endpoint-test2 in namespace services-3967 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-3967 +Oct 19 16:08:34.556: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:08:36.560: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3967 to expose endpoints map[pod1:[80]] +Oct 19 16:08:36.575: INFO: successfully validated that service endpoint-test2 in namespace services-3967 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Oct 19 16:08:36.575: INFO: Creating new exec pod +Oct 19 16:08:39.591: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3967 exec execpod7gdpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 19 16:08:39.956: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 19 16:08:39.956: INFO: 
stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:08:39.956: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3967 exec execpod7gdpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.58.96 80' +Oct 19 16:08:40.284: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.58.96 80\nConnection to 100.67.58.96 80 port [tcp/http] succeeded!\n" +Oct 19 16:08:40.284: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-3967 +Oct 19 16:08:40.295: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:08:42.299: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3967 to expose endpoints map[pod1:[80] pod2:[80]] +Oct 19 16:08:42.317: INFO: successfully validated that service endpoint-test2 in namespace services-3967 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Oct 19 16:08:43.317: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3967 exec execpod7gdpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 19 16:08:43.511: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 19 16:08:43.511: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:08:43.511: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3967 exec execpod7gdpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.58.96 80' +Oct 19 16:08:43.771: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.58.96 80\nConnection to 100.67.58.96 80 port [tcp/http] succeeded!\n" +Oct 19 16:08:43.771: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-3967 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3967 to expose endpoints map[pod2:[80]] +Oct 19 16:08:43.794: INFO: successfully validated that service endpoint-test2 in namespace services-3967 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Oct 19 16:08:44.795: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3967 exec execpod7gdpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Oct 19 16:08:45.027: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2 80\n+ echo hostName\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 19 16:08:45.027: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:08:45.027: 
INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3967 exec execpod7gdpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.58.96 80' +Oct 19 16:08:45.292: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.58.96 80\nConnection to 100.67.58.96 80 port [tcp/http] succeeded!\n" +Oct 19 16:08:45.292: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-3967 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3967 to expose endpoints map[] +Oct 19 16:08:45.310: INFO: successfully validated that service endpoint-test2 in namespace services-3967 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:08:45.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3967" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":70,"skipped":1465,"failed":0} + +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:08:45.327: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4524 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:08:45.475: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Oct 19 16:08:45.485: INFO: Number of nodes with available pods: 0 +Oct 19 16:08:45.485: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 16:08:46.521: INFO: Number of nodes with available pods: 1 +Oct 19 16:08:46.521: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq is running more than one daemon pod +Oct 19 16:08:47.495: INFO: Number of nodes with available pods: 2 +Oct 19 16:08:47.495: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Oct 19 16:08:47.520: INFO: Wrong image for pod: daemon-set-xlmwp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
+Oct 19 16:08:48.527: INFO: Wrong image for pod: daemon-set-xlmwp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 19 16:08:49.527: INFO: Wrong image for pod: daemon-set-xlmwp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 19 16:08:50.527: INFO: Pod daemon-set-fp74d is not available +Oct 19 16:08:50.527: INFO: Wrong image for pod: daemon-set-xlmwp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Oct 19 16:08:51.527: INFO: Pod daemon-set-zx57g is not available +STEP: Check that daemon pods are still running on every node of the cluster. +Oct 19 16:08:51.536: INFO: Number of nodes with available pods: 1 +Oct 19 16:08:51.536: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 16:08:52.544: INFO: Number of nodes with available pods: 2 +Oct 19 16:08:52.544: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4524, will wait for the garbage collector to delete the pods +Oct 19 16:08:52.617: INFO: Deleting DaemonSet.extensions daemon-set took: 4.032281ms +Oct 19 16:08:52.718: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.93613ms +Oct 19 16:08:55.522: INFO: Number of nodes with available pods: 0 +Oct 19 16:08:55.522: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 19 16:08:55.525: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"11211"},"items":null} + +Oct 19 16:08:55.527: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"11211"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:08:55.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4524" for this suite. 
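The image bump driving the rollout above can be issued by hand; a sketch using the names from this run and assuming the DaemonSet's container is called app, as in the e2e fixture:

```bash
kubectl -n daemonsets-4524 set image daemonset/daemon-set \
  app=k8s.gcr.io/e2e-test-images/agnhost:2.32
# RollingUpdate replaces the pods node by node; watch the rollout converge
kubectl -n daemonsets-4524 rollout status daemonset/daemon-set
```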
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":71,"skipped":1465,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:08:55.546: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5262 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-5144a33b-5144-413a-88de-fb790aec12d9 +STEP: Creating the pod +Oct 19 16:08:55.697: INFO: The status of Pod pod-projected-configmaps-748dc2d1-55d4-41ce-bc1d-e898551febf1 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:08:57.701: INFO: The status of Pod pod-projected-configmaps-748dc2d1-55d4-41ce-bc1d-e898551febf1 is Running (Ready = true) +STEP: Updating configmap projected-configmap-test-upd-5144a33b-5144-413a-88de-fb790aec12d9 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:10:06.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5262" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":72,"skipped":1468,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:10:06.111: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-2371 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:10:06.285: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 19 16:10:09.170: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2371 --namespace=crd-publish-openapi-2371 create -f -' +Oct 19 16:10:09.458: INFO: stderr: "" +Oct 19 16:10:09.458: INFO: stdout: "e2e-test-crd-publish-openapi-2716-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 19 16:10:09.458: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2371 --namespace=crd-publish-openapi-2371 delete e2e-test-crd-publish-openapi-2716-crds test-cr' +Oct 19 16:10:09.522: INFO: stderr: "" +Oct 19 16:10:09.522: INFO: stdout: "e2e-test-crd-publish-openapi-2716-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Oct 19 16:10:09.522: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2371 --namespace=crd-publish-openapi-2371 apply -f -' +Oct 19 16:10:09.652: INFO: stderr: "" +Oct 19 16:10:09.652: INFO: stdout: "e2e-test-crd-publish-openapi-2716-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 19 16:10:09.652: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2371 --namespace=crd-publish-openapi-2371 delete e2e-test-crd-publish-openapi-2716-crds test-cr' +Oct 19 16:10:09.702: INFO: stderr: "" +Oct 19 16:10:09.702: INFO: stdout: "e2e-test-crd-publish-openapi-2716-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Oct 19 16:10:09.702: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-2371 explain e2e-test-crd-publish-openapi-2716-crds' +Oct 19 16:10:09.837: INFO: stderr: "" +Oct 19 16:10:09.837: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2716-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:10:13.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2371" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":73,"skipped":1482,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:10:13.198: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4717 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 19 16:10:13.340: INFO: Waiting up to 5m0s for pod "pod-fa54049e-08a1-4afa-89ff-5aeda1e02f3c" in namespace "emptydir-4717" to be "Succeeded or Failed" +Oct 19 16:10:13.343: INFO: Pod "pod-fa54049e-08a1-4afa-89ff-5aeda1e02f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.401877ms +Oct 19 16:10:15.347: INFO: Pod "pod-fa54049e-08a1-4afa-89ff-5aeda1e02f3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007187738s +STEP: Saw pod success +Oct 19 16:10:15.347: INFO: Pod "pod-fa54049e-08a1-4afa-89ff-5aeda1e02f3c" satisfied condition "Succeeded or Failed" +Oct 19 16:10:15.350: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-fa54049e-08a1-4afa-89ff-5aeda1e02f3c container test-container: +STEP: delete the pod +Oct 19 16:10:15.369: INFO: Waiting for pod pod-fa54049e-08a1-4afa-89ff-5aeda1e02f3c to disappear +Oct 19 16:10:15.372: INFO: Pod pod-fa54049e-08a1-4afa-89ff-5aeda1e02f3c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:10:15.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4717" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":74,"skipped":1488,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:10:15.381: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-2023 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Oct 19 16:10:15.546: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:10:15.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-2023" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":75,"skipped":1537,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:10:15.571: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7087 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Oct 19 16:10:55.752: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1019 16:10:55.752353 4339 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Oct 19 16:10:55.752: INFO: Deleting pod "simpletest.rc-6kz8v" in namespace "gc-7087" +Oct 19 16:10:55.757: INFO: Deleting pod "simpletest.rc-7ttz6" in namespace "gc-7087" +Oct 19 16:10:55.764: INFO: Deleting pod "simpletest.rc-jr7ng" in namespace "gc-7087" +Oct 19 16:10:55.777: INFO: Deleting pod "simpletest.rc-kd5ws" in namespace "gc-7087" +Oct 19 16:10:55.783: INFO: Deleting pod "simpletest.rc-kg9xb" in namespace "gc-7087" +Oct 19 16:10:55.787: INFO: Deleting pod "simpletest.rc-l2d2b" in namespace "gc-7087" +Oct 19 16:10:55.794: INFO: Deleting pod "simpletest.rc-sz24q" in namespace "gc-7087" +Oct 19 16:10:55.799: INFO: Deleting pod "simpletest.rc-vgjx8" in namespace "gc-7087" +Oct 19 16:10:55.804: INFO: Deleting pod "simpletest.rc-vhl9r" in namespace "gc-7087" +Oct 19 16:10:55.811: INFO: Deleting pod "simpletest.rc-w7rjm" in namespace "gc-7087" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:10:55.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7087" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":76,"skipped":1547,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:10:55.823: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1752 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 pods, got 2 pods +STEP: Gathering metrics +Oct 19 16:10:57.002: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1019 16:10:57.002323 4339 metrics_grabber.go:151] Can't find 
kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:10:57.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1752" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":77,"skipped":1562,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:10:57.009: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7437 +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:10:57.145: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:11:00.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7437" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":78,"skipped":1613,"failed":0} +SSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:11:00.255: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4305 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-078efbee-ae7f-44cf-8873-e5e192116c7e +STEP: Creating a pod to test consume secrets +Oct 19 16:11:00.401: INFO: Waiting up to 5m0s for pod "pod-secrets-734590e6-601e-48c8-8b4f-612d3cf90d63" in namespace "secrets-4305" to be "Succeeded or Failed" +Oct 19 16:11:00.404: INFO: Pod "pod-secrets-734590e6-601e-48c8-8b4f-612d3cf90d63": Phase="Pending", Reason="", readiness=false. Elapsed: 3.369304ms +Oct 19 16:11:02.409: INFO: Pod "pod-secrets-734590e6-601e-48c8-8b4f-612d3cf90d63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008435305s +STEP: Saw pod success +Oct 19 16:11:02.409: INFO: Pod "pod-secrets-734590e6-601e-48c8-8b4f-612d3cf90d63" satisfied condition "Succeeded or Failed" +Oct 19 16:11:02.412: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-734590e6-601e-48c8-8b4f-612d3cf90d63 container secret-volume-test: +STEP: delete the pod +Oct 19 16:11:02.427: INFO: Waiting for pod pod-secrets-734590e6-601e-48c8-8b4f-612d3cf90d63 to disappear +Oct 19 16:11:02.430: INFO: Pod pod-secrets-734590e6-601e-48c8-8b4f-612d3cf90d63 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:11:02.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4305" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":79,"skipped":1618,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:11:02.439: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9255 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 19 16:11:02.585: INFO: The status of Pod annotationupdate19ca8a3f-655a-4a3a-85ad-78f453970172 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:11:04.589: INFO: The status of Pod annotationupdate19ca8a3f-655a-4a3a-85ad-78f453970172 is Running (Ready = true) +Oct 19 16:11:05.110: INFO: Successfully updated pod "annotationupdate19ca8a3f-655a-4a3a-85ad-78f453970172" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:11:09.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9255" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":80,"skipped":1632,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:11:09.174: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-1457 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nspatchtest-7e8d1546-7059-4cb0-ab95-3d9b0c7154b0-7497 +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:11:09.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-1457" for this suite. +STEP: Destroying namespace "nspatchtest-7e8d1546-7059-4cb0-ab95-3d9b0c7154b0-7497" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":81,"skipped":1650,"failed":0} + +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:11:09.490: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-368 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-15b45f0f-cd80-47e1-a550-7e8b3143b063 +STEP: Creating a pod to test consume configMaps +Oct 19 16:11:09.637: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e609d02-ea7c-4ba4-87a8-f05fb5606d80" in namespace "projected-368" to be "Succeeded or Failed" +Oct 19 16:11:09.641: INFO: Pod "pod-projected-configmaps-0e609d02-ea7c-4ba4-87a8-f05fb5606d80": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.653691ms +Oct 19 16:11:11.645: INFO: Pod "pod-projected-configmaps-0e609d02-ea7c-4ba4-87a8-f05fb5606d80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008126451s +STEP: Saw pod success +Oct 19 16:11:11.645: INFO: Pod "pod-projected-configmaps-0e609d02-ea7c-4ba4-87a8-f05fb5606d80" satisfied condition "Succeeded or Failed" +Oct 19 16:11:11.648: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-configmaps-0e609d02-ea7c-4ba4-87a8-f05fb5606d80 container agnhost-container: +STEP: delete the pod +Oct 19 16:11:11.663: INFO: Waiting for pod pod-projected-configmaps-0e609d02-ea7c-4ba4-87a8-f05fb5606d80 to disappear +Oct 19 16:11:11.666: INFO: Pod pod-projected-configmaps-0e609d02-ea7c-4ba4-87a8-f05fb5606d80 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:11:11.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-368" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":82,"skipped":1650,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:11:11.675: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7016 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-1be76eb0-a256-488e-a448-610fb49a9146 in namespace container-probe-7016 +Oct 19 16:11:13.829: INFO: Started pod liveness-1be76eb0-a256-488e-a448-610fb49a9146 in namespace container-probe-7016 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 19 16:11:13.832: INFO: Initial restart count of pod liveness-1be76eb0-a256-488e-a448-610fb49a9146 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:14.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7016" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":83,"skipped":1660,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:14.450: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7818 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 19 16:15:16.622: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:16.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-7818" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":84,"skipped":1701,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:16.640: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-8070 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:15:16.779: INFO: Creating deployment "webserver-deployment" +Oct 19 16:15:16.783: INFO: Waiting for observed generation 1 +Oct 19 16:15:18.792: INFO: Waiting for all required pods to come up +Oct 19 16:15:18.809: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Oct 19 16:15:18.809: INFO: Waiting for deployment "webserver-deployment" to complete +Oct 19 16:15:18.817: INFO: Updating deployment "webserver-deployment" with a non-existent image +Oct 19 16:15:18.825: INFO: Updating deployment webserver-deployment +Oct 19 16:15:18.825: INFO: Waiting for observed generation 2 +Oct 19 16:15:20.833: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Oct 19 16:15:20.836: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Oct 19 16:15:20.839: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 19 16:15:20.848: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Oct 19 16:15:20.848: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Oct 19 16:15:20.851: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 19 16:15:20.856: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Oct 19 16:15:20.856: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Oct 19 16:15:20.864: INFO: Updating deployment webserver-deployment +Oct 19 16:15:20.864: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Oct 19 16:15:20.872: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Oct 19 16:15:22.879: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 16:15:22.885: INFO: Deployment "webserver-deployment": 
+&Deployment{ObjectMeta:{webserver-deployment deployment-8070 ea1c7b1a-7110-486c-b47c-1a3114865a71 13646 3 2021-10-19 16:15:16 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002690b38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:9,UnavailableReplicas:24,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-19 16:15:20 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-19 16:15:22 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,},},ReadyReplicas:9,CollisionCount:nil,},} + +Oct 19 16:15:22.889: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8070 fe28459c-6d5f-478a-86d5-03f09b170ae8 13603 3 2021-10-19 
16:15:18 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ea1c7b1a-7110-486c-b47c-1a3114865a71 0xc002690f37 0xc002690f38}] [] [{kube-controller-manager Update apps/v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea1c7b1a-7110-486c-b47c-1a3114865a71\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002690fd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 19 16:15:22.889: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Oct 19 16:15:22.889: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-8070 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 13645 3 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ea1c7b1a-7110-486c-b47c-1a3114865a71 0xc002691037 0xc002691038}] [] [{kube-controller-manager Update apps/v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea1c7b1a-7110-486c-b47c-1a3114865a71\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026910c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:9,AvailableReplicas:9,Conditions:[]ReplicaSetCondition{},},} +Oct 19 16:15:22.901: INFO: Pod "webserver-deployment-795d758f88-2lw7l" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-2lw7l webserver-deployment-795d758f88- deployment-8070 49345f3d-8aea-4491-a0ba-eabe1c7a0352 13606 0 2021-10-19 16:15:18 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.0.108/32 cni.projectcalico.org/podIPs:100.96.0.108/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00247b937 0xc00247b938}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 
2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dlt4f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlt4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServic
eAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.108,StartTime:2021-10-19 16:15:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.901: INFO: Pod "webserver-deployment-795d758f88-49mbp" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-49mbp webserver-deployment-795d758f88- deployment-8070 bb54c5aa-5181-4ca1-b9fb-df66d2bd6bc7 13611 0 2021-10-19 16:15:18 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.1.46/32 cni.projectcalico.org/podIPs:100.96.1.46/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00247bb97 0xc00247bb98}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:19 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5v95c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5v95c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePu
llSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.46,StartTime:2021-10-19 16:15:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.901: INFO: Pod "webserver-deployment-795d758f88-4dmnw" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-4dmnw webserver-deployment-795d758f88- deployment-8070 2698b1f3-e6b2-49ab-88d1-d319f21c882e 13619 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.0.112/32 cni.projectcalico.org/podIPs:100.96.0.112/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00247bde0 0xc00247bde1}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ppb9j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppb9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.901: INFO: Pod "webserver-deployment-795d758f88-4t4cw" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-4t4cw webserver-deployment-795d758f88- deployment-8070 e64d884c-f9b6-45bc-bff1-e76e695f14d7 13634 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.1.54/32 cni.projectcalico.org/podIPs:100.96.1.54/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089a077 0xc00089a078}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-962h6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-962h6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.901: INFO: Pod "webserver-deployment-795d758f88-9qktw" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-9qktw webserver-deployment-795d758f88- deployment-8070 f5070fff-0f7c-4aa1-8157-181db46e5916 13521 0 2021-10-19 16:15:18 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.0.106/32 cni.projectcalico.org/podIPs:100.96.0.106/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089a2c7 0xc00089a2c8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c2krk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2krk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.902: INFO: Pod "webserver-deployment-795d758f88-gmnfw" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-gmnfw webserver-deployment-795d758f88- deployment-8070 528c8dbd-6da8-453b-a02a-5e982645c8a8 13625 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.1.51/32 cni.projectcalico.org/podIPs:100.96.1.51/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089a4e7 0xc00089a4e8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kthcs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kthcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.902: INFO: Pod "webserver-deployment-795d758f88-gzkmk" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-gzkmk webserver-deployment-795d758f88- deployment-8070 9bede543-005e-4dad-b9b5-e614924a1af0 13613 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.1.48/32 cni.projectcalico.org/podIPs:100.96.1.48/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089a727 0xc00089a728}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pgmjj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgmjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.902: INFO: Pod "webserver-deployment-795d758f88-ntsrc" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-ntsrc webserver-deployment-795d758f88- deployment-8070 4b1e7fba-689e-4a77-b3ff-68360bcc70f9 13626 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.0.114/32 cni.projectcalico.org/podIPs:100.96.0.114/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089a957 0xc00089a958}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6hrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6hrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.902: INFO: Pod "webserver-deployment-795d758f88-nxb2n" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-nxb2n webserver-deployment-795d758f88- deployment-8070 51f8340b-dd6d-455b-80af-408c2828876f 13617 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.0.111/32 cni.projectcalico.org/podIPs:100.96.0.111/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089ab87 0xc00089ab88}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xtz87,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtz87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.902: INFO: Pod "webserver-deployment-795d758f88-rpkwl" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-rpkwl webserver-deployment-795d758f88- deployment-8070 92f0fae1-2c08-4624-878b-650d3f97e0fc 13637 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.1.58/32 cni.projectcalico.org/podIPs:100.96.1.58/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089adc7 0xc00089adc8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-njxmg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-njxmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.903: INFO: Pod "webserver-deployment-795d758f88-vjqjm" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-vjqjm webserver-deployment-795d758f88- deployment-8070 9a34b82e-a6c8-4df3-9030-952046050b4e 13627 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.1.52/32 cni.projectcalico.org/podIPs:100.96.1.52/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089b007 0xc00089b008}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rrcpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrcpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.903: INFO: Pod "webserver-deployment-795d758f88-xm9s8" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-xm9s8 webserver-deployment-795d758f88- deployment-8070 3f2ce604-1ff3-4903-a115-f647e56ce493 13520 0 2021-10-19 16:15:18 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.1.47/32 cni.projectcalico.org/podIPs:100.96.1.47/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089b277 0xc00089b278}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c5kjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5kjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.903: INFO: Pod "webserver-deployment-795d758f88-xmvrq" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-xmvrq webserver-deployment-795d758f88- deployment-8070 f8d20d8e-7291-4c3c-8b7d-be8b1e13379d 13523 0 2021-10-19 16:15:18 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:100.96.0.107/32 cni.projectcalico.org/podIPs:100.96.0.107/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe28459c-6d5f-478a-86d5-03f09b170ae8 0xc00089b497 0xc00089b498}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe28459c-6d5f-478a-86d5-03f09b170ae8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7smmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7smmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alwa
ys,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.903: INFO: Pod "webserver-deployment-847dcfb7fb-2dwqn" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2dwqn webserver-deployment-847dcfb7fb- deployment-8070 1ee8dfbf-0c3c-4a4a-a01b-ac82e43a55dd 13473 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.42/32 cni.projectcalico.org/podIPs:100.96.1.42/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc00089b6b7 0xc00089b6b8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bnr9q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bnr9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.42,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://4a2ed67f64378333b490679c75042430a92bc8713a068d007d40ef77ddc37c03,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.903: INFO: Pod "webserver-deployment-847dcfb7fb-2g2g5" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2g2g5 webserver-deployment-847dcfb7fb- deployment-8070 3fa0de4f-d29c-4c04-b017-7d7c0722b3d4 13476 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.43/32 cni.projectcalico.org/podIPs:100.96.1.43/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc00089b8d0 0xc00089b8d1}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nz45h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nz45h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.43,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://be30c01064aee952371761e14c98110ec26a6de873f91b63e24fc63c36e4e194,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.903: INFO: Pod "webserver-deployment-847dcfb7fb-6vvqm" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6vvqm webserver-deployment-847dcfb7fb- deployment-8070 631e45b9-1a0e-4427-a6f2-4ab2c9508067 13618 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.50/32 cni.projectcalico.org/podIPs:100.96.1.50/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc00089bb00 0xc00089bb01}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r78t2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r78t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.904: INFO: Pod "webserver-deployment-847dcfb7fb-6z6xx" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6z6xx webserver-deployment-847dcfb7fb- deployment-8070 9fe900ae-76fb-4f7e-8e89-1ed6749a96bc 13644 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.49/32 cni.projectcalico.org/podIPs:100.96.1.49/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc00089bee7 0xc00089bee8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.49\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6m8sb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6m8sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.49,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://341b3bfc1c65ee4917e1fcc3d95e5fc09566fd2c6091e3402bdf7255f5634a73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.904: INFO: Pod "webserver-deployment-847dcfb7fb-72dw2" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-72dw2 webserver-deployment-847dcfb7fb- deployment-8070 30cf56d0-0eb9-4846-b22e-89dc10a0469b 13642 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.57/32 cni.projectcalico.org/podIPs:100.96.1.57/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf21c0 0xc003bf21c1}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j7lhr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7lhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.904: INFO: Pod "webserver-deployment-847dcfb7fb-7vv4w" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7vv4w webserver-deployment-847dcfb7fb- deployment-8070 0db3399e-66ab-49a9-987a-0273aab8cce1 13632 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.117/32 cni.projectcalico.org/podIPs:100.96.0.117/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf23c7 0xc003bf23c8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8nw7r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nw7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.904: INFO: Pod "webserver-deployment-847dcfb7fb-8vh6f" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8vh6f webserver-deployment-847dcfb7fb- deployment-8070 22c72cf9-a476-4d96-8a7d-1191e6842450 13467 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.103/32 cni.projectcalico.org/podIPs:100.96.0.103/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf25c7 0xc003bf25c8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sbwps,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbwps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.103,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://92ac06543ae06c42a1a49200c5c9152f76162a2a7338248f1c162e9a21943342,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.904: INFO: Pod "webserver-deployment-847dcfb7fb-c254z" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-c254z webserver-deployment-847dcfb7fb- deployment-8070 8ab6b16d-e69b-4e0c-a312-87dfe7637773 13470 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.44/32 cni.projectcalico.org/podIPs:100.96.1.44/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf27f7 0xc003bf27f8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dmrnk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmrnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.44,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://47285d7430da378ab49ed949f11e26f91d6c08fa7c872683e2f08cda4c0d7da2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.904: INFO: Pod "webserver-deployment-847dcfb7fb-cj9c7" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-cj9c7 webserver-deployment-847dcfb7fb- deployment-8070 c2da65ef-0bc2-4679-b0f3-298a776d6570 13635 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.55/32 cni.projectcalico.org/podIPs:100.96.1.55/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf2a30 0xc003bf2a31}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-czwn2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czwn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.905: INFO: Pod "webserver-deployment-847dcfb7fb-cvsx7" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-cvsx7 webserver-deployment-847dcfb7fb- deployment-8070 9e7d53e3-a0cc-40cd-bac0-8f751432b57d 13612 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.109/32 cni.projectcalico.org/podIPs:100.96.0.109/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf2c37 0xc003bf2c38}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xzgc9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xzgc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.905: INFO: Pod "webserver-deployment-847dcfb7fb-l6txt" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-l6txt webserver-deployment-847dcfb7fb- deployment-8070 847559ee-e61c-4268-a59d-c3b4fa2bb4bd 13631 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.116/32 cni.projectcalico.org/podIPs:100.96.0.116/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf2e57 0xc003bf2e58}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t8pch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8pch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.905: INFO: Pod "webserver-deployment-847dcfb7fb-lxt7r" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-lxt7r webserver-deployment-847dcfb7fb- deployment-8070 f8c4f00c-87b4-4c71-8a68-1221d99b6d14 13622 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.113/32 cni.projectcalico.org/podIPs:100.96.0.113/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3057 0xc003bf3058}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4k9zm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4k9zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.905: INFO: Pod "webserver-deployment-847dcfb7fb-mfzcn" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-mfzcn webserver-deployment-847dcfb7fb- deployment-8070 e6bd24ce-2251-43c3-ae0c-860d2a66c8b8 13479 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.45/32 cni.projectcalico.org/podIPs:100.96.1.45/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3257 0xc003bf3258}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rncsl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rncsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:100.96.1.45,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://77a496b0ad805dfd9a0c30cbdfcc01f2e817471821f72452b8ff52c2786b0853,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.905: INFO: Pod "webserver-deployment-847dcfb7fb-n8vsx" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-n8vsx webserver-deployment-847dcfb7fb- deployment-8070 cd1a4cfa-774a-438e-b3bc-64e5d49b8f0d 13614 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.110/32 cni.projectcalico.org/podIPs:100.96.0.110/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3470 0xc003bf3471}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d8cns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d8cns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.905: INFO: Pod "webserver-deployment-847dcfb7fb-r6sh6" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-r6sh6 webserver-deployment-847dcfb7fb- deployment-8070 579d9e5b-e0fc-4c28-b894-2c73cc497763 13633 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.53/32 cni.projectcalico.org/podIPs:100.96.1.53/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3667 0xc003bf3668}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hdhnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdhnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.905: INFO: Pod "webserver-deployment-847dcfb7fb-s4vbh" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-s4vbh webserver-deployment-847dcfb7fb- deployment-8070 b8cb56f8-19bf-47ca-a932-a6788ce6b5e6 13464 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.102/32 cni.projectcalico.org/podIPs:100.96.0.102/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3867 0xc003bf3868}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cjcnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjcnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.102,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://b9e944adf79e92bacc8e4cdd777fd878183ff8f2713cfeda8b3af212ef474a96,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.906: INFO: Pod "webserver-deployment-847dcfb7fb-s8kh4" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-s8kh4 webserver-deployment-847dcfb7fb- deployment-8070 7302d6b3-9e7c-4282-8d11-d50b8d77d83d 13630 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.115/32 cni.projectcalico.org/podIPs:100.96.0.115/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3a87 0xc003bf3a88}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cd76t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cd76t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 16:15:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.906: INFO: Pod "webserver-deployment-847dcfb7fb-sbxh6" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-sbxh6 webserver-deployment-847dcfb7fb- deployment-8070 29cdce43-30fe-4322-894b-eb550e060c2c 13458 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.100/32 cni.projectcalico.org/podIPs:100.96.0.100/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3c87 0xc003bf3c88}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fhxcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhxcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.100,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://89aedcf86d4ec09cacff12d1635cc40881c2f5e918b48a1ab5f6e10dd4266f40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.906: INFO: Pod "webserver-deployment-847dcfb7fb-v277m" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-v277m webserver-deployment-847dcfb7fb- deployment-8070 e9c59f1d-601f-4a21-9ecc-fa54fd7f4fe7 13461 0 2021-10-19 16:15:16 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.105/32 cni.projectcalico.org/podIPs:100.96.0.105/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc003bf3ea7 0xc003bf3ea8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 16:15:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.105\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xqp8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqp8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.105,StartTime:2021-10-19 16:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:15:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://9bde552a63756a86f63bbe43bb02e8da42f94fc130a357082d4837b56a3169af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 16:15:22.906: INFO: Pod "webserver-deployment-847dcfb7fb-vnlgx" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vnlgx webserver-deployment-847dcfb7fb- deployment-8070 9c70b89d-922b-465e-ab1c-e75045960799 13636 0 2021-10-19 16:15:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.1.56/32 cni.projectcalico.org/podIPs:100.96.1.56/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 797c3ade-b8c8-4c9c-a977-86a6f9dce2ef 0xc0041c40c7 0xc0041c40c8}] [] [{kube-controller-manager Update v1 2021-10-19 16:15:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797c3ade-b8c8-4c9c-a977-86a6f9dce2ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 16:15:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2021-10-19 16:15:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9c7f4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9c7f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:15:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.3.120,PodIP:,StartTime:2021-10-19 16:15:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:22.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8070" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":85,"skipped":1736,"failed":0} + +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:22.914: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename runtimeclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-564 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Oct 19 16:15:23.068: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Oct 19 16:15:23.086: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:23.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-564" for this suite. +•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":86,"skipped":1736,"failed":0} +SSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:23.109: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5755 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-5755 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-5755 +I1019 16:15:23.265524 4339 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5755, replica count: 2 +I1019 16:15:26.317331 4339 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady +Oct 19 16:15:26.317: INFO: Creating new exec pod +Oct 19 16:15:29.333: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5755 exec execpodtbnnh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 19 16:15:29.753: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 19 16:15:29.753: INFO: stdout: "externalname-service-6gjlv" +Oct 19 16:15:29.753: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5755 exec execpodtbnnh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.41.174 80' +Oct 19 16:15:29.942: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.41.174 80\nConnection to 100.71.41.174 80 port [tcp/http] succeeded!\n" +Oct 19 16:15:29.942: INFO: stdout: "" +Oct 19 16:15:30.943: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5755 exec execpodtbnnh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.41.174 80' +Oct 19 16:15:31.146: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.41.174 80\nConnection to 100.71.41.174 80 port [tcp/http] succeeded!\n" +Oct 19 16:15:31.146: INFO: stdout: "" +Oct 19 16:15:31.942: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-5755 exec execpodtbnnh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.41.174 80' +Oct 19 16:15:32.165: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.41.174 80\nConnection to 100.71.41.174 80 port [tcp/http] succeeded!\n" +Oct 19 16:15:32.165: INFO: stdout: "externalname-service-6gjlv" +Oct 19 16:15:32.165: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:32.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5755" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":87,"skipped":1739,"failed":0} +SS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:32.183: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingressclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingressclass-7878 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 19 16:15:32.340: INFO: starting watch +STEP: patching +STEP: updating +Oct 19 16:15:32.349: INFO: waiting for watch events with expected annotations +Oct 19 16:15:32.349: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:32.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-7878" for this suite. 
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":88,"skipped":1741,"failed":0} + +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:32.380: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-1680 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:32.516: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption-2 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2-5668 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-1680 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:32.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-5668" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:32.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-1680" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":89,"skipped":1741,"failed":0} +SSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:32.705: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-611 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Oct 19 16:15:32.846: INFO: Pod name pod-release: Found 0 pods out of 1 +Oct 19 16:15:37.873: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:38.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-611" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":90,"skipped":1747,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:38.900: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-9348 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Oct 19 16:15:39.191: INFO: Number of nodes with available pods: 0 +Oct 19 16:15:39.191: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 16:15:40.202: INFO: Number of nodes with available pods: 0 +Oct 19 16:15:40.202: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 16:15:41.201: INFO: Number of nodes with available pods: 2 +Oct 19 16:15:41.201: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +Oct 19 16:15:41.223: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"14090"},"items":null} + +Oct 19 16:15:41.226: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"14090"},"items":[{"metadata":{"name":"daemon-set-78gff","generateName":"daemon-set-","namespace":"daemonsets-9348","uid":"cffbd129-ce51-4403-a154-57f634358af5","resourceVersion":"14090","creationTimestamp":"2021-10-19T16:15:39Z","deletionTimestamp":"2021-10-19T16:16:11Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/podIP":"100.96.1.60/32","cni.projectcalico.org/podIPs":"100.96.1.60/32","kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"6f7b1b4c-f700-41b7-a5df-bfc3367aba87","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-19T16:15:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-19T16:15:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f7b1b4c-f700-41b7-a5df-bfc3367aba87\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-19T16:15:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{"
.":{},"k:{\"ip\":\"100.96.1.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-fb6tg","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmhay-ddd.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-fb6tg","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:39Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:39Z"}],"hostIP":"10.250.3.120","podIP":"100.96.1.60","podIPs":[{"ip":"100.96.1.60"}],"startTime":"2021-10-19T16:15:39Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-19T16:15:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://12926fe5edb20854216836b8ae92bd867f553534bb566190a891795890d4dc87","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lwrbw","generateName":"daemon-set-","namespace":"daemonsets-9348","uid":"4c0f7ddb-c1a5-4ac0-b3de-db73ea5197fd","resourceVersion":"14089","creationTimestamp":"2021-10-19T16:15:39Z","deletionTimestamp":"2021-10-19T16:16:11Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/podIP":"100.96.0.122/32","cni.projectcalico.org/podIPs":"100.96.0.122/32"
,"kubernetes.io/psp":"e2e-test-privileged-psp"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"6f7b1b4c-f700-41b7-a5df-bfc3367aba87","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-19T16:15:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-19T16:15:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f7b1b4c-f700-41b7-a5df-bfc3367aba87\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-19T16:15:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.122\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-hrtdm","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"env":[{"name":"KUBERNETES_SERVICE_HOST","value":"api.tmhay-ddd.it.internal.staging.k8s.ondemand.com"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-hrtdm","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io
/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:39Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T16:15:39Z"}],"hostIP":"10.250.1.123","podIP":"100.96.0.122","podIPs":[{"ip":"100.96.0.122"}],"startTime":"2021-10-19T16:15:39Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-10-19T16:15:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://3c6c2a3e64489c71aee7847436bec06488f38b819acd08fc52799568c16ab184","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:41.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-9348" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":91,"skipped":1774,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:41.244: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-6479 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test service account token: +Oct 19 16:15:41.393: INFO: Waiting up to 5m0s for pod "test-pod-1a381834-48f5-4a00-9856-343ddb3d2c8b" in namespace "svcaccounts-6479" to be "Succeeded or Failed" +Oct 19 16:15:41.396: INFO: Pod "test-pod-1a381834-48f5-4a00-9856-343ddb3d2c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.981361ms +Oct 19 16:15:43.399: INFO: Pod "test-pod-1a381834-48f5-4a00-9856-343ddb3d2c8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006399381s +STEP: Saw pod success +Oct 19 16:15:43.399: INFO: Pod "test-pod-1a381834-48f5-4a00-9856-343ddb3d2c8b" satisfied condition "Succeeded or Failed" +Oct 19 16:15:43.403: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod test-pod-1a381834-48f5-4a00-9856-343ddb3d2c8b container agnhost-container: +STEP: delete the pod +Oct 19 16:15:43.420: INFO: Waiting for pod test-pod-1a381834-48f5-4a00-9856-343ddb3d2c8b to disappear +Oct 19 16:15:43.423: INFO: Pod test-pod-1a381834-48f5-4a00-9856-343ddb3d2c8b no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:43.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6479" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":92,"skipped":1801,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:43.449: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3177 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 19 16:15:43.602: INFO: The status of Pod pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:15:45.606: INFO: The status of Pod pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Oct 19 16:15:46.124: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2" +Oct 19 16:15:46.125: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2" in namespace "pods-3177" to be "terminated due to deadline exceeded" +Oct 19 16:15:46.128: INFO: Pod "pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2": Phase="Running", Reason="", readiness=true. Elapsed: 3.076037ms +Oct 19 16:15:48.132: INFO: Pod "pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007624034s +Oct 19 16:15:50.137: INFO: Pod "pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2": Phase="Failed", Reason="DeadlineExceeded", readiness=true. 
Elapsed: 4.012960866s +Oct 19 16:15:50.138: INFO: Pod "pod-update-activedeadlineseconds-c173f2b4-383e-49c8-9a3b-c4a2c5b034c2" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:15:50.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-3177" for this suite. +•{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":93,"skipped":1809,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:15:50.149: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-5331 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:16:03.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5331" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":346,"completed":94,"skipped":1835,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:16:03.436: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2527 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Oct 19 16:16:03.586: INFO: Waiting up to 5m0s for pod "pod-1fc55140-17c1-4a83-9e6c-3b01f8fa3340" in namespace "emptydir-2527" to be "Succeeded or Failed" +Oct 19 16:16:03.589: INFO: Pod "pod-1fc55140-17c1-4a83-9e6c-3b01f8fa3340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868913ms +Oct 19 16:16:05.600: INFO: Pod "pod-1fc55140-17c1-4a83-9e6c-3b01f8fa3340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013727563s +STEP: Saw pod success +Oct 19 16:16:05.600: INFO: Pod "pod-1fc55140-17c1-4a83-9e6c-3b01f8fa3340" satisfied condition "Succeeded or Failed" +Oct 19 16:16:05.603: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-1fc55140-17c1-4a83-9e6c-3b01f8fa3340 container test-container: +STEP: delete the pod +Oct 19 16:16:05.616: INFO: Waiting for pod pod-1fc55140-17c1-4a83-9e6c-3b01f8fa3340 to disappear +Oct 19 16:16:05.619: INFO: Pod pod-1fc55140-17c1-4a83-9e6c-3b01f8fa3340 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:16:05.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2527" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":95,"skipped":1847,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:16:05.629: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4038 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-4nw6 +STEP: Creating a pod to test atomic-volume-subpath +Oct 19 16:16:05.780: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4nw6" in namespace "subpath-4038" to be "Succeeded or Failed" +Oct 19 16:16:05.784: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.973268ms +Oct 19 16:16:07.790: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 2.010566318s +Oct 19 16:16:09.795: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 4.015358235s +Oct 19 16:16:11.799: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 6.019409958s +Oct 19 16:16:13.804: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 8.02440109s +Oct 19 16:16:15.811: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 10.031461215s +Oct 19 16:16:17.815: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 12.035223941s +Oct 19 16:16:19.820: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 14.040094767s +Oct 19 16:16:21.824: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 16.044189895s +Oct 19 16:16:23.829: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 18.049145206s +Oct 19 16:16:25.834: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Running", Reason="", readiness=true. Elapsed: 20.054148213s +Oct 19 16:16:27.838: INFO: Pod "pod-subpath-test-configmap-4nw6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.058088739s +STEP: Saw pod success +Oct 19 16:16:27.838: INFO: Pod "pod-subpath-test-configmap-4nw6" satisfied condition "Succeeded or Failed" +Oct 19 16:16:27.841: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-subpath-test-configmap-4nw6 container test-container-subpath-configmap-4nw6: +STEP: delete the pod +Oct 19 16:16:27.854: INFO: Waiting for pod pod-subpath-test-configmap-4nw6 to disappear +Oct 19 16:16:27.857: INFO: Pod pod-subpath-test-configmap-4nw6 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-4nw6 +Oct 19 16:16:27.857: INFO: Deleting pod "pod-subpath-test-configmap-4nw6" in namespace "subpath-4038" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:16:27.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4038" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":96,"skipped":1888,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:16:27.869: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4708 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:16:39.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4708" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":346,"completed":97,"skipped":1893,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:16:39.057: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-307 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:16:39.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a49c891c-6e9d-4a0a-9ef8-5699be7de4f1" in namespace "projected-307" to be "Succeeded or Failed" +Oct 19 16:16:39.209: INFO: Pod "downwardapi-volume-a49c891c-6e9d-4a0a-9ef8-5699be7de4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.197454ms +Oct 19 16:16:41.214: INFO: Pod "downwardapi-volume-a49c891c-6e9d-4a0a-9ef8-5699be7de4f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008397282s +STEP: Saw pod success +Oct 19 16:16:41.214: INFO: Pod "downwardapi-volume-a49c891c-6e9d-4a0a-9ef8-5699be7de4f1" satisfied condition "Succeeded or Failed" +Oct 19 16:16:41.217: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-a49c891c-6e9d-4a0a-9ef8-5699be7de4f1 container client-container: +STEP: delete the pod +Oct 19 16:16:41.232: INFO: Waiting for pod downwardapi-volume-a49c891c-6e9d-4a0a-9ef8-5699be7de4f1 to disappear +Oct 19 16:16:41.234: INFO: Pod downwardapi-volume-a49c891c-6e9d-4a0a-9ef8-5699be7de4f1 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:16:41.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-307" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":98,"skipped":1901,"failed":0} +SSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:16:41.243: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-3301 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-3301 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 19 16:16:41.381: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 19 16:16:41.411: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:16:43.415: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:16:45.415: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:16:47.415: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:16:49.416: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:16:51.414: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:16:53.415: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 19 16:16:53.422: INFO: The status of Pod netserver-1 is Running (Ready = false) +Oct 19 16:16:55.443: INFO: The status of Pod netserver-1 is Running (Ready = false) +Oct 19 16:16:57.427: INFO: The status of Pod netserver-1 is Running (Ready = false) +Oct 19 16:16:59.428: INFO: The status of Pod netserver-1 is Running (Ready = false) +Oct 19 16:17:01.431: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 19 16:17:03.486: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 19 16:17:03.486: INFO: Going to poll 100.96.0.128 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 19 16:17:03.490: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.0.128 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:17:03.490: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:17:04.650: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 19 16:17:04.651: INFO: Going to poll 100.96.1.61 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Oct 19 16:17:04.655: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.61 8081 | grep -v '^\s*$'] 
Namespace:pod-network-test-3301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:17:04.655: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:17:05.837: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:17:05.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-3301" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":99,"skipped":1907,"failed":0} +SSSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:17:05.846: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-509 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating secret secrets-509/secret-test-cc24f048-6776-4cec-af65-0a850d67462c +STEP: Creating a pod to test consume secrets +Oct 19 16:17:05.992: INFO: Waiting up to 5m0s for pod "pod-configmaps-96e05223-2137-47b4-9306-5eaa55be39a5" in namespace "secrets-509" to be "Succeeded or Failed" +Oct 19 16:17:05.996: INFO: Pod "pod-configmaps-96e05223-2137-47b4-9306-5eaa55be39a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.921756ms +Oct 19 16:17:08.001: INFO: Pod "pod-configmaps-96e05223-2137-47b4-9306-5eaa55be39a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009037773s +STEP: Saw pod success +Oct 19 16:17:08.001: INFO: Pod "pod-configmaps-96e05223-2137-47b4-9306-5eaa55be39a5" satisfied condition "Succeeded or Failed" +Oct 19 16:17:08.004: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-96e05223-2137-47b4-9306-5eaa55be39a5 container env-test: +STEP: delete the pod +Oct 19 16:17:08.023: INFO: Waiting for pod pod-configmaps-96e05223-2137-47b4-9306-5eaa55be39a5 to disappear +Oct 19 16:17:08.027: INFO: Pod pod-configmaps-96e05223-2137-47b4-9306-5eaa55be39a5 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:17:08.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-509" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":100,"skipped":1913,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:17:08.037: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9588 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:17:08.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3c0cf5c-e89f-4321-876f-33812f4f92f0" in namespace "downward-api-9588" to be "Succeeded or Failed" +Oct 19 16:17:08.217: INFO: Pod "downwardapi-volume-b3c0cf5c-e89f-4321-876f-33812f4f92f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971382ms +Oct 19 16:17:10.276: INFO: Pod "downwardapi-volume-b3c0cf5c-e89f-4321-876f-33812f4f92f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061814201s +STEP: Saw pod success +Oct 19 16:17:10.276: INFO: Pod "downwardapi-volume-b3c0cf5c-e89f-4321-876f-33812f4f92f0" satisfied condition "Succeeded or Failed" +Oct 19 16:17:10.281: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-b3c0cf5c-e89f-4321-876f-33812f4f92f0 container client-container: +STEP: delete the pod +Oct 19 16:17:10.381: INFO: Waiting for pod downwardapi-volume-b3c0cf5c-e89f-4321-876f-33812f4f92f0 to disappear +Oct 19 16:17:10.384: INFO: Pod downwardapi-volume-b3c0cf5c-e89f-4321-876f-33812f4f92f0 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:17:10.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9588" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":101,"skipped":1919,"failed":0} +S +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:17:10.474: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6263 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod with failed condition +STEP: updating the pod +Oct 19 16:19:11.144: INFO: Successfully updated pod "var-expansion-1c25ef9b-90e9-412c-b706-930d084a8f19" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Oct 19 16:19:13.151: INFO: Deleting pod "var-expansion-1c25ef9b-90e9-412c-b706-930d084a8f19" in namespace "var-expansion-6263" +Oct 19 16:19:13.156: INFO: Wait up to 5m0s for pod "var-expansion-1c25ef9b-90e9-412c-b706-930d084a8f19" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:19:45.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6263" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":102,"skipped":1920,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:19:45.175: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-single-pod +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-3137 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Oct 19 16:19:45.315: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 19 16:20:45.350: INFO: Waiting for terminating namespaces to be deleted... 
+[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:20:45.354: INFO: Starting informer... +STEP: Starting pod... +Oct 19 16:20:45.578: INFO: Pod is running on shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Oct 19 16:20:45.592: INFO: Pod wasn't evicted. Proceeding +Oct 19 16:20:45.592: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Oct 19 16:22:00.606: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:00.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-3137" for this suite. +•{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":103,"skipped":1946,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:00.616: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4309 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:22:00.761: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 19 16:22:00.775: INFO: The status of Pod pod-logs-websocket-8d96bc79-26a6-4e3c-9a4e-9c9d41b56856 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:22:02.780: INFO: The status of Pod pod-logs-websocket-8d96bc79-26a6-4e3c-9a4e-9c9d41b56856 is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:02.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4309" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":104,"skipped":1960,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:02.848: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3580 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:03.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3580" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":105,"skipped":1983,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:03.033: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-2982 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:05.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-2982" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":106,"skipped":1998,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:05.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-3331 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Oct 19 16:22:25.448: INFO: EndpointSlice for Service endpointslice-3331/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:35.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-3331" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":107,"skipped":2059,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:35.469: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename lease-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-3400 +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:35.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-3400" for this suite. 
+•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":108,"skipped":2110,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:35.658: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1171 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name secret-emptykey-test-27e7891e-25e3-4ef9-9281-b88470a78acd +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:35.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1171" for this suite. +•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":109,"skipped":2121,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:35.803: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-5802 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:40.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-5802" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":110,"skipped":2133,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:40.458: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4608 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:51.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4608" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":346,"completed":111,"skipped":2210,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:51.724: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-538 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override all +Oct 19 16:22:51.874: INFO: Waiting up to 5m0s for pod "client-containers-fd070c9d-d771-42e4-a70f-13a4bb1ca62d" in namespace "containers-538" to be "Succeeded or Failed" +Oct 19 16:22:51.877: INFO: Pod "client-containers-fd070c9d-d771-42e4-a70f-13a4bb1ca62d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.238864ms +Oct 19 16:22:53.882: INFO: Pod "client-containers-fd070c9d-d771-42e4-a70f-13a4bb1ca62d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008012558s +STEP: Saw pod success +Oct 19 16:22:53.882: INFO: Pod "client-containers-fd070c9d-d771-42e4-a70f-13a4bb1ca62d" satisfied condition "Succeeded or Failed" +Oct 19 16:22:53.885: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod client-containers-fd070c9d-d771-42e4-a70f-13a4bb1ca62d container agnhost-container: +STEP: delete the pod +Oct 19 16:22:53.899: INFO: Waiting for pod client-containers-fd070c9d-d771-42e4-a70f-13a4bb1ca62d to disappear +Oct 19 16:22:53.902: INFO: Pod client-containers-fd070c9d-d771-42e4-a70f-13a4bb1ca62d no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:22:53.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-538" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":112,"skipped":2245,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:22:53.912: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2391 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:22:54.516: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:22:57.534: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:23:07.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2391" for this suite. +STEP: Destroying namespace "webhook-2391-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":113,"skipped":2253,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:23:08.016: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-1003 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:28:08.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-1003" for this suite. 
+ +• [SLOW TEST:300.224 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":114,"skipped":2302,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:28:08.240: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-9316 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 19 16:28:08.391: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 19 16:29:08.428: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:29:08.432: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-4300 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Oct 19 16:29:10.618: INFO: found a healthy node: shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:29:24.676: INFO: pods created so far: [1 1 1] +Oct 19 16:29:24.677: INFO: length of pods created so far: 3 +Oct 19 16:29:26.689: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:29:33.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-4300" for this suite. 
+[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:29:33.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-9316" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":115,"skipped":2323,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:29:33.774: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5631 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Oct 19 16:29:33.924: INFO: Waiting up to 5m0s for pod "pod-ca2d8e51-0465-44a1-aaba-190e6f81d98c" in namespace "emptydir-5631" to be "Succeeded or Failed" +Oct 19 16:29:33.927: INFO: Pod "pod-ca2d8e51-0465-44a1-aaba-190e6f81d98c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.024603ms +Oct 19 16:29:35.933: INFO: Pod "pod-ca2d8e51-0465-44a1-aaba-190e6f81d98c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008279859s +STEP: Saw pod success +Oct 19 16:29:35.933: INFO: Pod "pod-ca2d8e51-0465-44a1-aaba-190e6f81d98c" satisfied condition "Succeeded or Failed" +Oct 19 16:29:35.936: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-ca2d8e51-0465-44a1-aaba-190e6f81d98c container test-container: +STEP: delete the pod +Oct 19 16:29:35.994: INFO: Waiting for pod pod-ca2d8e51-0465-44a1-aaba-190e6f81d98c to disappear +Oct 19 16:29:35.997: INFO: Pod pod-ca2d8e51-0465-44a1-aaba-190e6f81d98c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:29:35.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5631" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":116,"skipped":2334,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:29:36.007: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7636 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-7636/configmap-test-e2980268-9733-4daa-b746-39effce51aa6 +STEP: Creating a pod to test consume configMaps +Oct 19 16:29:36.155: INFO: Waiting up to 5m0s for pod "pod-configmaps-d25fb005-0118-41d0-ae22-f36cd8364957" in namespace "configmap-7636" to be "Succeeded or Failed" +Oct 19 16:29:36.158: INFO: Pod "pod-configmaps-d25fb005-0118-41d0-ae22-f36cd8364957": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095508ms +Oct 19 16:29:38.162: INFO: Pod "pod-configmaps-d25fb005-0118-41d0-ae22-f36cd8364957": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00708998s +STEP: Saw pod success +Oct 19 16:29:38.162: INFO: Pod "pod-configmaps-d25fb005-0118-41d0-ae22-f36cd8364957" satisfied condition "Succeeded or Failed" +Oct 19 16:29:38.166: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-d25fb005-0118-41d0-ae22-f36cd8364957 container env-test: +STEP: delete the pod +Oct 19 16:29:38.180: INFO: Waiting for pod pod-configmaps-d25fb005-0118-41d0-ae22-f36cd8364957 to disappear +Oct 19 16:29:38.183: INFO: Pod pod-configmaps-d25fb005-0118-41d0-ae22-f36cd8364957 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:29:38.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7636" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":117,"skipped":2350,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:29:38.193: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6080 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Oct 19 16:29:38.335: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 create -f -' +Oct 19 16:29:38.611: INFO: stderr: "" +Oct 19 16:29:38.611: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Oct 19 16:29:38.611: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 19 16:29:38.660: INFO: stderr: "" +Oct 19 16:29:38.660: INFO: stdout: "update-demo-nautilus-hz6qh update-demo-nautilus-mb2wl " +Oct 19 16:29:38.660: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods update-demo-nautilus-hz6qh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 16:29:38.707: INFO: stderr: "" +Oct 19 16:29:38.707: INFO: stdout: "" +Oct 19 16:29:38.707: INFO: update-demo-nautilus-hz6qh is created but not running +Oct 19 16:29:43.708: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 19 16:29:43.757: INFO: stderr: "" +Oct 19 16:29:43.758: INFO: stdout: "update-demo-nautilus-hz6qh update-demo-nautilus-mb2wl " +Oct 19 16:29:43.758: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods update-demo-nautilus-hz6qh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 16:29:43.804: INFO: stderr: "" +Oct 19 16:29:43.804: INFO: stdout: "true" +Oct 19 16:29:43.804: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods update-demo-nautilus-hz6qh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 19 16:29:43.850: INFO: stderr: "" +Oct 19 16:29:43.850: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 19 16:29:43.850: INFO: validating pod update-demo-nautilus-hz6qh +Oct 19 16:29:43.908: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 19 16:29:43.908: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 19 16:29:43.908: INFO: update-demo-nautilus-hz6qh is verified up and running +Oct 19 16:29:43.909: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods update-demo-nautilus-mb2wl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 19 16:29:43.958: INFO: stderr: "" +Oct 19 16:29:43.958: INFO: stdout: "true" +Oct 19 16:29:43.958: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods update-demo-nautilus-mb2wl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 19 16:29:44.010: INFO: stderr: "" +Oct 19 16:29:44.010: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Oct 19 16:29:44.010: INFO: validating pod update-demo-nautilus-mb2wl +Oct 19 16:29:44.066: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 19 16:29:44.066: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 19 16:29:44.066: INFO: update-demo-nautilus-mb2wl is verified up and running +STEP: using delete to clean up resources +Oct 19 16:29:44.066: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 delete --grace-period=0 --force -f -' +Oct 19 16:29:44.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 19 16:29:44.115: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 19 16:29:44.115: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get rc,svc -l name=update-demo --no-headers' +Oct 19 16:29:44.168: INFO: stderr: "No resources found in kubectl-6080 namespace.\n" +Oct 19 16:29:44.168: INFO: stdout: "" +Oct 19 16:29:44.168: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6080 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 19 16:29:44.224: INFO: stderr: "" +Oct 19 16:29:44.224: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:29:44.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6080" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":118,"skipped":2355,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:29:44.234: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4555 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Oct 19 16:29:44.370: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Oct 19 16:29:55.559: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:29:58.928: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:30:11.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4555" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":119,"skipped":2395,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:30:11.754: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-751 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:30:12.302: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:30:15.319: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:30:15.323: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7951-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:30:18.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-751" for this suite. +STEP: Destroying namespace "webhook-751-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":120,"skipped":2437,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:30:18.649: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6243 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on node default medium +Oct 19 16:30:18.898: INFO: Waiting up to 5m0s for pod "pod-1d32dc06-cc34-4ccc-b0fd-a899811e15bb" in namespace "emptydir-6243" to be "Succeeded or Failed" +Oct 19 16:30:18.902: INFO: Pod "pod-1d32dc06-cc34-4ccc-b0fd-a899811e15bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.448454ms +Oct 19 16:30:20.906: INFO: Pod "pod-1d32dc06-cc34-4ccc-b0fd-a899811e15bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007708244s +STEP: Saw pod success +Oct 19 16:30:20.906: INFO: Pod "pod-1d32dc06-cc34-4ccc-b0fd-a899811e15bb" satisfied condition "Succeeded or Failed" +Oct 19 16:30:20.910: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-1d32dc06-cc34-4ccc-b0fd-a899811e15bb container test-container: +STEP: delete the pod +Oct 19 16:30:20.924: INFO: Waiting for pod pod-1d32dc06-cc34-4ccc-b0fd-a899811e15bb to disappear +Oct 19 16:30:20.927: INFO: Pod pod-1d32dc06-cc34-4ccc-b0fd-a899811e15bb no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:30:20.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6243" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":121,"skipped":2464,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:30:20.937: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9298 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name projected-secret-test-0848a188-4394-419c-b737-6297089ce22f +STEP: Creating a pod to test consume secrets +Oct 19 16:30:21.086: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6748a0ea-2cc5-4e67-8ea4-92bdd47439b4" in namespace "projected-9298" to be "Succeeded or Failed" +Oct 19 16:30:21.090: INFO: Pod "pod-projected-secrets-6748a0ea-2cc5-4e67-8ea4-92bdd47439b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.455622ms +Oct 19 16:30:23.095: INFO: Pod "pod-projected-secrets-6748a0ea-2cc5-4e67-8ea4-92bdd47439b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008089647s +STEP: Saw pod success +Oct 19 16:30:23.095: INFO: Pod "pod-projected-secrets-6748a0ea-2cc5-4e67-8ea4-92bdd47439b4" satisfied condition "Succeeded or Failed" +Oct 19 16:30:23.098: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-secrets-6748a0ea-2cc5-4e67-8ea4-92bdd47439b4 container secret-volume-test: +STEP: delete the pod +Oct 19 16:30:23.113: INFO: Waiting for pod pod-projected-secrets-6748a0ea-2cc5-4e67-8ea4-92bdd47439b4 to disappear +Oct 19 16:30:23.116: INFO: Pod pod-projected-secrets-6748a0ea-2cc5-4e67-8ea4-92bdd47439b4 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:30:23.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9298" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":122,"skipped":2475,"failed":0} +SSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:30:23.125: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-341 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-341 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 19 16:30:23.261: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 19 16:30:23.289: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:30:25.294: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:30:27.293: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:30:29.295: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:30:31.294: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:30:33.294: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:30:35.294: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 19 16:30:35.302: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 19 16:30:37.329: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 19 16:30:37.329: INFO: Breadth first check of 100.96.0.150 on host 10.250.1.123... +Oct 19 16:30:37.333: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.0.151:9080/dial?request=hostname&protocol=udp&host=100.96.0.150&port=8081&tries=1'] Namespace:pod-network-test-341 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:30:37.333: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:30:37.526: INFO: Waiting for responses: map[] +Oct 19 16:30:37.526: INFO: reached 100.96.0.150 after 0/1 tries +Oct 19 16:30:37.526: INFO: Breadth first check of 100.96.1.64 on host 10.250.3.120... 
+Oct 19 16:30:37.530: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.0.151:9080/dial?request=hostname&protocol=udp&host=100.96.1.64&port=8081&tries=1'] Namespace:pod-network-test-341 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:30:37.530: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:30:37.700: INFO: Waiting for responses: map[] +Oct 19 16:30:37.700: INFO: reached 100.96.1.64 after 0/1 tries +Oct 19 16:30:37.700: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:30:37.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-341" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":123,"skipped":2480,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:30:37.711: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-1243 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:36:01.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-1243" for this suite. 
+ +• [SLOW TEST:324.181 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":124,"skipped":2528,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:36:01.892: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-1009 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-95t6 +STEP: Creating a pod to test atomic-volume-subpath +Oct 19 16:36:02.058: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-95t6" in namespace "subpath-1009" to be "Succeeded or Failed" +Oct 19 16:36:02.062: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425505ms +Oct 19 16:36:04.067: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 2.009118318s +Oct 19 16:36:06.072: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 4.013886513s +Oct 19 16:36:08.077: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 6.018945703s +Oct 19 16:36:10.082: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 8.023675628s +Oct 19 16:36:12.086: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 10.028200889s +Oct 19 16:36:14.091: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 12.03281633s +Oct 19 16:36:16.096: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 14.037850032s +Oct 19 16:36:18.100: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 16.041846634s +Oct 19 16:36:20.104: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. Elapsed: 18.046152735s +Oct 19 16:36:22.109: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.050909088s +Oct 19 16:36:24.114: INFO: Pod "pod-subpath-test-configmap-95t6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.056386139s +STEP: Saw pod success +Oct 19 16:36:24.114: INFO: Pod "pod-subpath-test-configmap-95t6" satisfied condition "Succeeded or Failed" +Oct 19 16:36:24.118: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-subpath-test-configmap-95t6 container test-container-subpath-configmap-95t6: +STEP: delete the pod +Oct 19 16:36:24.137: INFO: Waiting for pod pod-subpath-test-configmap-95t6 to disappear +Oct 19 16:36:24.140: INFO: Pod pod-subpath-test-configmap-95t6 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-95t6 +Oct 19 16:36:24.140: INFO: Deleting pod "pod-subpath-test-configmap-95t6" in namespace "subpath-1009" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:36:24.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-1009" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":125,"skipped":2534,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:36:24.152: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-4285 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-4285 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Oct 19 16:36:24.290: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 19 16:36:24.320: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:36:26.325: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:36:28.326: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:36:30.325: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:36:32.325: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:36:34.324: INFO: The status of Pod netserver-0 is Running (Ready = false) +Oct 19 16:36:36.325: INFO: The status of Pod netserver-0 is Running (Ready = true) +Oct 19 16:36:36.332: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Oct 19 16:36:38.371: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Oct 19 16:36:38.371: INFO: Going to poll 100.96.0.155 on port 8083 at least 0 
times, with a maximum of 34 tries before failing +Oct 19 16:36:38.374: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.0.155:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4285 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:36:38.374: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:36:38.607: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 19 16:36:38.607: INFO: Going to poll 100.96.1.65 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Oct 19 16:36:38.610: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.65:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4285 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 16:36:38.610: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:36:38.840: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:36:38.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-4285" for this suite. +•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":126,"skipped":2549,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:36:38.851: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3001 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:36:38.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f50259db-129a-4ebd-931a-98d29a8cfcc5" in namespace "downward-api-3001" to be "Succeeded or Failed" +Oct 19 16:36:39.002: INFO: Pod "downwardapi-volume-f50259db-129a-4ebd-931a-98d29a8cfcc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006208ms +Oct 19 16:36:41.015: INFO: Pod "downwardapi-volume-f50259db-129a-4ebd-931a-98d29a8cfcc5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.016923156s +STEP: Saw pod success +Oct 19 16:36:41.015: INFO: Pod "downwardapi-volume-f50259db-129a-4ebd-931a-98d29a8cfcc5" satisfied condition "Succeeded or Failed" +Oct 19 16:36:41.018: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-f50259db-129a-4ebd-931a-98d29a8cfcc5 container client-container: +STEP: delete the pod +Oct 19 16:36:41.082: INFO: Waiting for pod downwardapi-volume-f50259db-129a-4ebd-931a-98d29a8cfcc5 to disappear +Oct 19 16:36:41.085: INFO: Pod downwardapi-volume-f50259db-129a-4ebd-931a-98d29a8cfcc5 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:36:41.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3001" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":127,"skipped":2558,"failed":0} +S +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:36:41.094: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2447 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-a013123e-b56e-4126-9e96-c35b576c7aad +STEP: Creating a pod to test consume secrets +Oct 19 16:36:41.242: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-629d4e19-cf77-4c3a-98c6-9d6db23070a8" in namespace "projected-2447" to be "Succeeded or Failed" +Oct 19 16:36:41.248: INFO: Pod "pod-projected-secrets-629d4e19-cf77-4c3a-98c6-9d6db23070a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.522254ms +Oct 19 16:36:43.253: INFO: Pod "pod-projected-secrets-629d4e19-cf77-4c3a-98c6-9d6db23070a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010287396s +STEP: Saw pod success +Oct 19 16:36:43.253: INFO: Pod "pod-projected-secrets-629d4e19-cf77-4c3a-98c6-9d6db23070a8" satisfied condition "Succeeded or Failed" +Oct 19 16:36:43.256: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-secrets-629d4e19-cf77-4c3a-98c6-9d6db23070a8 container projected-secret-volume-test: +STEP: delete the pod +Oct 19 16:36:43.270: INFO: Waiting for pod pod-projected-secrets-629d4e19-cf77-4c3a-98c6-9d6db23070a8 to disappear +Oct 19 16:36:43.272: INFO: Pod pod-projected-secrets-629d4e19-cf77-4c3a-98c6-9d6db23070a8 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:36:43.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2447" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":128,"skipped":2559,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:36:43.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-9020 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 19 16:36:45.462: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:36:47.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9020" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":129,"skipped":2596,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:36:47.491: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-751 +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:36:47.630: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating first CR +Oct 19 16:36:50.187: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-19T16:36:50Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-19T16:36:50Z]] name:name1 resourceVersion:21326 uid:857a0096-b6d4-491e-a1e0-655b68fa9295] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Oct 19 16:37:00.195: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-19T16:37:00Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-19T16:37:00Z]] name:name2 resourceVersion:21394 uid:b59befa8-381c-4a0e-b8c8-4f8a8f45e869] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Oct 19 16:37:10.274: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-19T16:36:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-19T16:37:10Z]] name:name1 resourceVersion:21439 uid:857a0096-b6d4-491e-a1e0-655b68fa9295] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Oct 19 16:37:20.283: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-19T16:37:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update 
time:2021-10-19T16:37:20Z]] name:name2 resourceVersion:21482 uid:b59befa8-381c-4a0e-b8c8-4f8a8f45e869] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Oct 19 16:37:30.288: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-19T16:36:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-19T16:37:10Z]] name:name1 resourceVersion:21549 uid:857a0096-b6d4-491e-a1e0-655b68fa9295] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Oct 19 16:37:40.297: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-19T16:37:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-19T16:37:20Z]] name:name2 resourceVersion:21592 uid:b59befa8-381c-4a0e-b8c8-4f8a8f45e869] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:37:50.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-751" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":130,"skipped":2610,"failed":0} +SSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:37:50.819: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-6281 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:37:50.985: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 19 16:37:55.989: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Oct 19 16:37:55.996: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Oct 19 16:37:56.004: INFO: observed ReplicaSet test-rs in namespace replicaset-6281 with ReadyReplicas 1, AvailableReplicas 1 +Oct 19 16:37:56.008: INFO: observed ReplicaSet test-rs in namespace replicaset-6281 with ReadyReplicas 1, AvailableReplicas 1 +Oct 19 16:37:56.020: INFO: observed ReplicaSet test-rs in namespace replicaset-6281 with ReadyReplicas 1, AvailableReplicas 1 +Oct 19 16:37:56.023: INFO: 
observed ReplicaSet test-rs in namespace replicaset-6281 with ReadyReplicas 1, AvailableReplicas 1 +Oct 19 16:37:56.893: INFO: observed ReplicaSet test-rs in namespace replicaset-6281 with ReadyReplicas 2, AvailableReplicas 2 +Oct 19 16:37:57.086: INFO: observed Replicaset test-rs in namespace replicaset-6281 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:37:57.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-6281" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":131,"skipped":2613,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:37:57.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-662 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating cluster-info +Oct 19 16:37:57.231: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-662 cluster-info' +Oct 19 16:37:57.297: INFO: stderr: "" +Oct 19 16:37:57.297: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:37:57.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-662" for this suite. 
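+
+The cluster-info check above can be reproduced by hand against the shoot cluster; a minimal sketch, assuming KUBECONFIG points at the shoot kubeconfig (the dump directory is illustrative):
+
+# print the control plane URL, exactly what the test validates above
+kubectl cluster-info
+# for deeper debugging, dump cluster state for selected namespaces to a local directory
+kubectl cluster-info dump --namespaces=default --output-directory=/tmp/cluster-dump
+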
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":132,"skipped":2630,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:37:57.305: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7032 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 19 16:37:57.460: INFO: The status of Pod annotationupdatec1c4762b-a38c-4269-b451-5f0c4269a6d0 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:37:59.474: INFO: The status of Pod annotationupdatec1c4762b-a38c-4269-b451-5f0c4269a6d0 is Running (Ready = true) +Oct 19 16:38:00.041: INFO: Successfully updated pod "annotationupdatec1c4762b-a38c-4269-b451-5f0c4269a6d0" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:38:02.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7032" for this suite. 
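+
+The projected downwardAPI test above checks that a pod's annotations, projected into a volume file, are rewritten by the kubelet when the annotations change. A minimal sketch of the same mechanism (pod name, image, and annotation values are illustrative, not taken from the run above):
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: annotationupdate-demo
+  annotations:
+    build: "one"
+spec:
+  containers:
+  - name: client
+    image: busybox:1.36
+    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: annotations
+            fieldRef:
+              fieldPath: metadata.annotations
+EOF
+# change the annotation; the kubelet refreshes /etc/podinfo/annotations on its next sync
+kubectl annotate pod annotationupdate-demo build="two" --overwrite
+kubectl logs -f annotationupdate-demo
+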
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":133,"skipped":2639,"failed":0} +SSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:38:02.111: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8692 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create deployment with httpd image +Oct 19 16:38:02.247: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8692 create -f -' +Oct 19 16:38:02.374: INFO: stderr: "" +Oct 19 16:38:02.374: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Oct 19 16:38:02.374: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8692 diff -f -' +Oct 19 16:38:02.514: INFO: rc: 1 +Oct 19 16:38:02.514: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8692 delete -f -' +Oct 19 16:38:02.562: INFO: stderr: "" +Oct 19 16:38:02.562: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:38:02.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8692" for this suite. 
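+
+The rc: 1 above is expected: kubectl diff exits 0 when live and declared state match, 1 when a difference is found, and greater than 1 on error. A sketch of the same check by hand (deployment name and image tags are illustrative):
+
+kubectl create deployment httpd-demo --image=httpd:2.4.38 --dry-run=client -o yaml > /tmp/httpd-demo.yaml
+kubectl apply -f /tmp/httpd-demo.yaml
+# declare a different image, then ask for the diff against the live object
+sed -i 's/httpd:2.4.38/httpd:2.4.39/' /tmp/httpd-demo.yaml
+kubectl diff -f /tmp/httpd-demo.yaml; echo "rc=$?"   # prints the image diff, rc=1
+kubectl delete -f /tmp/httpd-demo.yaml
+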
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":134,"skipped":2645,"failed":0} +SS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:38:02.596: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-3582 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-3582 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating stateful set ss in namespace statefulset-3582 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3582 +Oct 19 16:38:02.742: INFO: Found 0 stateful pods, waiting for 1 +Oct 19 16:38:12.746: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Oct 19 16:38:12.749: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3582 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 19 16:38:13.002: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 19 16:38:13.002: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 19 16:38:13.002: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 19 16:38:13.006: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 19 16:38:23.011: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 19 16:38:23.011: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 19 16:38:23.025: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 19 16:38:23.025: INFO: ss-0 shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2021-10-19 16:38:02 +0000 UTC }] +Oct 19 16:38:23.025: INFO: +Oct 19 16:38:23.025: INFO: StatefulSet ss has not reached scale 3, at 1 +Oct 19 16:38:24.028: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996747243s +Oct 19 16:38:25.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992721914s +Oct 19 16:38:26.041: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985292455s +Oct 19 16:38:27.046: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979996232s +Oct 19 16:38:28.050: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974915025s +Oct 19 16:38:29.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.97099581s +Oct 19 16:38:30.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966285943s +Oct 19 16:38:31.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962025471s +Oct 19 16:38:32.069: INFO: Verifying statefulset ss doesn't scale past 3 for another 956.801502ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3582 +Oct 19 16:38:33.074: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 19 16:38:33.365: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 19 16:38:33.365: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 19 16:38:33.365: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 19 16:38:33.365: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3582 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 19 16:38:33.642: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 19 16:38:33.643: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 19 16:38:33.643: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 19 16:38:33.643: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3582 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 19 16:38:33.951: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 19 16:38:33.951: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 19 16:38:33.951: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 19 16:38:33.955: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 16:38:33.955: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running 
- Ready=true +Oct 19 16:38:33.955: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Oct 19 16:38:33.958: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3582 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 19 16:38:34.177: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 19 16:38:34.177: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 19 16:38:34.177: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 19 16:38:34.177: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3582 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 19 16:38:34.400: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 19 16:38:34.400: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 19 16:38:34.400: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 19 16:38:34.400: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-3582 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 19 16:38:34.596: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 19 16:38:34.596: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 19 16:38:34.596: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 19 16:38:34.596: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 19 16:38:34.599: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Oct 19 16:38:44.610: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 19 16:38:44.610: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 19 16:38:44.610: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 19 16:38:44.623: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 19 16:38:44.623: INFO: ss-0 shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:02 +0000 UTC }] +Oct 19 16:38:44.623: INFO: ss-1 shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 
16:38:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:23 +0000 UTC }] +Oct 19 16:38:44.623: INFO: ss-2 shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:38:23 +0000 UTC }] +Oct 19 16:38:44.623: INFO: +Oct 19 16:38:44.623: INFO: StatefulSet ss has not reached scale 0, at 3 +Oct 19 16:38:45.627: INFO: Verifying statefulset ss doesn't scale past 0 for another 8.994197394s +Oct 19 16:38:46.632: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.990691521s +Oct 19 16:38:47.637: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.986381241s +Oct 19 16:38:48.641: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.981206337s +Oct 19 16:38:49.646: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.976309304s +Oct 19 16:38:50.650: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.972232658s +Oct 19 16:38:51.654: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.968326943s +Oct 19 16:38:52.658: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.964326105s +Oct 19 16:38:53.662: INFO: Verifying statefulset ss doesn't scale past 0 for another 959.981109ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3582 +Oct 19 16:38:54.667: INFO: Scaling statefulset ss to 0 +Oct 19 16:38:54.679: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 19 16:38:54.682: INFO: Deleting all statefulset in ns statefulset-3582 +Oct 19 16:38:54.685: INFO: Scaling statefulset ss to 0 +Oct 19 16:38:54.695: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 19 16:38:54.698: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:38:54.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-3582" for this suite. 
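+
+"Burst" scaling in the test above means podManagementPolicy: Parallel, under which the StatefulSet controller creates and deletes pods without waiting for neighbours to become Ready (the test forces unreadiness by moving index.html out of the web root, as seen in the exec commands). A minimal sketch of such a StatefulSet (names and image are illustrative):
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: ss-demo
+spec:
+  clusterIP: None          # headless service backing the StatefulSet
+  selector: {app: ss-demo}
+  ports:
+  - port: 80
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: ss-demo
+spec:
+  serviceName: ss-demo
+  podManagementPolicy: Parallel   # burst scaling: no ordered, one-at-a-time pod handling
+  replicas: 1
+  selector:
+    matchLabels: {app: ss-demo}
+  template:
+    metadata:
+      labels: {app: ss-demo}
+    spec:
+      containers:
+      - name: webserver
+        image: httpd:2.4.38
+        readinessProbe:
+          httpGet: {path: /index.html, port: 80}
+EOF
+kubectl scale statefulset ss-demo --replicas=3   # proceeds even while existing pods are unready
+kubectl scale statefulset ss-demo --replicas=0
+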
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":135,"skipped":2647,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:38:54.720: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-80 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +STEP: reading a file in the container +Oct 19 16:38:57.385: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-80 pod-service-account-428dfffe-f449-4120-9d0b-794c797f75ed -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Oct 19 16:38:57.593: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-80 pod-service-account-428dfffe-f449-4120-9d0b-794c797f75ed -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Oct 19 16:38:57.792: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-80 pod-service-account-428dfffe-f449-4120-9d0b-794c797f75ed -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:38:58.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-80" for this suite. 
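+
+Every pod gets its ServiceAccount's token, CA certificate, and namespace projected at a well-known path; that is what the three "reading a file in the container" steps above verify. A sketch by hand (pod name and image are illustrative):
+
+kubectl run sa-demo --image=busybox:1.36 --restart=Never -- sleep 3600
+kubectl wait --for=condition=Ready pod/sa-demo --timeout=2m
+kubectl exec sa-demo -- ls /var/run/secrets/kubernetes.io/serviceaccount    # ca.crt  namespace  token
+kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
+kubectl delete pod sa-demo
+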
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":136,"skipped":2658,"failed":0} +S +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:38:58.037: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1477 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:38:58.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1477" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":137,"skipped":2659,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:38:58.184: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-635 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +W1019 16:39:04.361875 4339 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Oct 19 16:39:04.362: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:39:04.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-635" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":138,"skipped":2660,"failed":0} +SSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:39:04.370: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-8245 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-8245 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-8245 +STEP: Waiting until pod test-pod will start running in namespace statefulset-8245 +STEP: Creating statefulset with conflicting port in namespace statefulset-8245 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8245 +Oct 19 16:39:06.551: INFO: Observed stateful pod in namespace: statefulset-8245, name: ss-0, uid: 5befccb1-023a-4100-92a5-d8a0751ab488, status phase: Pending. Waiting for statefulset controller to delete. 
+Oct 19 16:39:06.561: INFO: Observed stateful pod in namespace: statefulset-8245, name: ss-0, uid: 5befccb1-023a-4100-92a5-d8a0751ab488, status phase: Failed. Waiting for statefulset controller to delete. +Oct 19 16:39:06.571: INFO: Observed stateful pod in namespace: statefulset-8245, name: ss-0, uid: 5befccb1-023a-4100-92a5-d8a0751ab488, status phase: Failed. Waiting for statefulset controller to delete. +Oct 19 16:39:06.572: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8245 +STEP: Removing pod with conflicting port in namespace statefulset-8245 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8245 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 19 16:39:08.586: INFO: Deleting all statefulset in ns statefulset-8245 +Oct 19 16:39:08.589: INFO: Scaling statefulset ss to 0 +Oct 19 16:39:18.654: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 19 16:39:18.657: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:39:18.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-8245" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":139,"skipped":2665,"failed":0} +SSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:39:18.677: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-6344 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 19 16:39:18.826: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 19 16:40:18.866: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Oct 19 16:40:18.888: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 19 16:40:18.896: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 19 16:40:18.912: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 19 16:40:18.921: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 
+STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:40:34.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-6344" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":140,"skipped":2672,"failed":0} +SS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:40:35.024: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4533 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:40:35.180: INFO: Create a RollingUpdate DaemonSet +Oct 19 16:40:35.184: INFO: Check that daemon pods launch on every node of the cluster +Oct 19 16:40:35.190: INFO: Number of nodes with available pods: 0 +Oct 19 16:40:35.191: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 16:40:36.200: INFO: Number of nodes with available pods: 0 +Oct 19 16:40:36.201: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 is running more than one daemon pod +Oct 19 16:40:37.201: INFO: Number of nodes with available pods: 2 +Oct 19 16:40:37.201: INFO: Number of running nodes: 2, number of available pods: 2 +Oct 19 16:40:37.201: INFO: Update the DaemonSet to trigger a rollout +Oct 19 16:40:37.210: INFO: Updating DaemonSet daemon-set +Oct 19 16:40:40.227: INFO: Roll back the DaemonSet before rollout is complete +Oct 19 16:40:40.233: INFO: Updating DaemonSet daemon-set +Oct 19 16:40:40.234: INFO: Make sure DaemonSet rollback is complete +Oct 19 16:40:40.237: INFO: Wrong image for pod: daemon-set-zz5jc. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
+Oct 19 16:40:40.237: INFO: Pod daemon-set-zz5jc is not available +Oct 19 16:40:44.247: INFO: Pod daemon-set-fj7xr is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4533, will wait for the garbage collector to delete the pods +Oct 19 16:40:44.315: INFO: Deleting DaemonSet.extensions daemon-set took: 4.513237ms +Oct 19 16:40:44.415: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.131656ms +Oct 19 16:40:47.420: INFO: Number of nodes with available pods: 0 +Oct 19 16:40:47.420: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 19 16:40:47.423: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"23043"},"items":null} + +Oct 19 16:40:47.426: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"23043"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:40:47.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4533" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":141,"skipped":2674,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:40:47.449: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1005 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-7581 +STEP: Creating secret with name secret-test-456b9ac2-2ddf-48aa-89b6-7f0cf9433784 +STEP: Creating a pod to test consume secrets +Oct 19 16:40:47.730: INFO: Waiting up to 5m0s for pod "pod-secrets-25db4e4b-b4ef-44d2-b167-1137d6fdf08d" in namespace "secrets-1005" to be "Succeeded or Failed" +Oct 19 16:40:47.735: INFO: Pod "pod-secrets-25db4e4b-b4ef-44d2-b167-1137d6fdf08d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097973ms +Oct 19 16:40:49.740: INFO: Pod "pod-secrets-25db4e4b-b4ef-44d2-b167-1137d6fdf08d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009095808s +STEP: Saw pod success +Oct 19 16:40:49.740: INFO: Pod "pod-secrets-25db4e4b-b4ef-44d2-b167-1137d6fdf08d" satisfied condition "Succeeded or Failed" +Oct 19 16:40:49.743: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-25db4e4b-b4ef-44d2-b167-1137d6fdf08d container secret-volume-test: +STEP: delete the pod +Oct 19 16:40:49.804: INFO: Waiting for pod pod-secrets-25db4e4b-b4ef-44d2-b167-1137d6fdf08d to disappear +Oct 19 16:40:49.809: INFO: Pod pod-secrets-25db4e4b-b4ef-44d2-b167-1137d6fdf08d no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:40:49.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1005" for this suite. +STEP: Destroying namespace "secret-namespace-7581" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":142,"skipped":2709,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:40:49.823: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-923 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:40:49.978: INFO: The status of Pod pod-secrets-30ecfdde-e603-4115-93d2-d50227f02c28 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:40:51.982: INFO: The status of Pod pod-secrets-30ecfdde-e603-4115-93d2-d50227f02c28 is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:40:52.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-923" for this suite. 
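+
+The "should not conflict" test above mounts a secret volume and a configmap volume side by side in one pod (these volume types are the "emptyDir wrapper" volumes the suite name refers to) and checks that neither clobbers the other. A minimal sketch (all names and the image are illustrative):
+
+kubectl create secret generic wrapper-secret --from-literal=key=value
+kubectl create configmap wrapper-cm --from-literal=key=value
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: wrapper-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: c
+    image: busybox:1.36
+    command: ["sh", "-c", "ls /etc/secret /etc/cm"]
+    volumeMounts:
+    - {name: s, mountPath: /etc/secret}
+    - {name: cm, mountPath: /etc/cm}
+  volumes:
+  - name: s
+    secret: {secretName: wrapper-secret}
+  - name: cm
+    configMap: {name: wrapper-cm}
+EOF
+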
+•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":143,"skipped":2711,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:40:52.025: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-75 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-4ff694e4-5744-4493-9efe-2f82f569e9c6 +STEP: Creating secret with name s-test-opt-upd-ac67b378-1ccd-4f0e-ae22-5cf817514b56 +STEP: Creating the pod +Oct 19 16:40:52.210: INFO: The status of Pod pod-secrets-3dd5967b-4e2a-4269-bc0e-f7e7969dd252 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:40:54.215: INFO: The status of Pod pod-secrets-3dd5967b-4e2a-4269-bc0e-f7e7969dd252 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-4ff694e4-5744-4493-9efe-2f82f569e9c6 +STEP: Updating secret s-test-opt-upd-ac67b378-1ccd-4f0e-ae22-5cf817514b56 +STEP: Creating secret with name s-test-opt-create-f8bb11ad-ede9-487e-ac1a-100b4cf55779 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:40:56.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-75" for this suite. 
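+
+The "optional updates" test above relies on secret volumes marked optional: a pod may start before the referenced secret exists, and the kubelet populates the volume once the secret appears (updates and deletions propagate the same way, which is what the create/update/delete steps above observe). A minimal sketch (names and image are illustrative):
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: opt-secret-demo
+spec:
+  containers:
+  - name: c
+    image: busybox:1.36
+    command: ["sh", "-c", "while true; do ls /etc/opt; sleep 5; done"]
+    volumeMounts:
+    - name: opt
+      mountPath: /etc/opt
+  volumes:
+  - name: opt
+    secret:
+      secretName: created-later       # does not exist yet
+      optional: true                  # pod starts anyway
+EOF
+kubectl create secret generic created-later --from-literal=k=v
+kubectl logs -f opt-secret-demo       # the key shows up once the kubelet syncs the volume
+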
+•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":144,"skipped":2738,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:40:56.420: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-8303 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:40:57.006: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:41:00.024: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:41:00.028: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:41:03.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-8303" for this suite. 
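+
+Conversion between CR versions is wired into a CRD through spec.conversion. A self-contained sketch of a two-version CRD follows (group, names, and schemas are illustrative; strategy: None is used here so the snippet runs standalone, whereas the test above deploys a webhook service and sets strategy: Webhook with a clientConfig pointing at it):
+
+kubectl apply -f - <<'EOF'
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: widgets.demo.example.com
+spec:
+  group: demo.example.com
+  scope: Namespaced
+  names: {plural: widgets, singular: widget, kind: Widget}
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
+  - name: v2
+    served: true
+    storage: false
+    schema:
+      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
+  conversion:
+    strategy: None   # the test instead uses strategy: Webhook plus webhook.clientConfig
+EOF
+# objects can then be listed at either served version, which is what "List CRs in v1/v2" checks:
+kubectl get widgets.v1.demo.example.com
+kubectl get widgets.v2.demo.example.com
+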
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 +•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":145,"skipped":2756,"failed":0} + +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:41:03.386: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8265 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:41:03.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22ab0e55-bad8-47ec-b500-fa44d9e6a3ad" in namespace "projected-8265" to be "Succeeded or Failed" +Oct 19 16:41:03.535: INFO: Pod "downwardapi-volume-22ab0e55-bad8-47ec-b500-fa44d9e6a3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071401ms +Oct 19 16:41:05.539: INFO: Pod "downwardapi-volume-22ab0e55-bad8-47ec-b500-fa44d9e6a3ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007832499s +STEP: Saw pod success +Oct 19 16:41:05.539: INFO: Pod "downwardapi-volume-22ab0e55-bad8-47ec-b500-fa44d9e6a3ad" satisfied condition "Succeeded or Failed" +Oct 19 16:41:05.542: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-22ab0e55-bad8-47ec-b500-fa44d9e6a3ad container client-container: +STEP: delete the pod +Oct 19 16:41:05.597: INFO: Waiting for pod downwardapi-volume-22ab0e55-bad8-47ec-b500-fa44d9e6a3ad to disappear +Oct 19 16:41:05.600: INFO: Pod downwardapi-volume-22ab0e55-bad8-47ec-b500-fa44d9e6a3ad no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:41:05.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8265" for this suite. 
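+
+The cpu limit test above projects the container's resources.limits.cpu into a volume file via a resourceFieldRef. A minimal sketch (pod name, image, and divisor are illustrative; with divisor 1m a 500m limit is written as "500"):
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cpu-limit-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox:1.36
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
+    resources:
+      limits: {cpu: 500m, memory: 64Mi}
+    volumeMounts:
+    - {name: podinfo, mountPath: /etc/podinfo}
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: cpu_limit
+            resourceFieldRef:
+              containerName: client-container
+              resource: limits.cpu
+              divisor: 1m
+EOF
+# once the pod reaches Succeeded:
+kubectl logs cpu-limit-demo    # prints 500
+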
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":146,"skipped":2756,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:41:05.609: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4581 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:41:05.744: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Oct 19 16:41:09.105: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 create -f -' +Oct 19 16:41:09.390: INFO: stderr: "" +Oct 19 16:41:09.390: INFO: stdout: "e2e-test-crd-publish-openapi-8154-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 19 16:41:09.390: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 delete e2e-test-crd-publish-openapi-8154-crds test-foo' +Oct 19 16:41:09.452: INFO: stderr: "" +Oct 19 16:41:09.452: INFO: stdout: "e2e-test-crd-publish-openapi-8154-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Oct 19 16:41:09.452: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 apply -f -' +Oct 19 16:41:09.592: INFO: stderr: "" +Oct 19 16:41:09.592: INFO: stdout: "e2e-test-crd-publish-openapi-8154-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 19 16:41:09.592: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 delete e2e-test-crd-publish-openapi-8154-crds test-foo' +Oct 19 16:41:09.642: INFO: stderr: "" +Oct 19 16:41:09.642: INFO: stdout: "e2e-test-crd-publish-openapi-8154-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Oct 19 16:41:09.642: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 create -f -' +Oct 19 16:41:09.763: INFO: rc: 1 +Oct 19 16:41:09.763: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 apply -f -' +Oct 19 16:41:09.897: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Oct 19 16:41:09.897: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 create -f -' +Oct 19 16:41:10.015: INFO: rc: 1 +Oct 19 16:41:10.015: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 --namespace=crd-publish-openapi-4581 apply -f -' +Oct 19 16:41:10.131: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Oct 19 16:41:10.131: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 explain e2e-test-crd-publish-openapi-8154-crds' +Oct 19 16:41:10.252: INFO: stderr: "" +Oct 19 16:41:10.252: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8154-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Oct 19 16:41:10.252: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 explain e2e-test-crd-publish-openapi-8154-crds.metadata' +Oct 19 16:41:10.373: INFO: stderr: "" +Oct 19 16:41:10.373: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8154-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Oct 19 16:41:10.374: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 explain e2e-test-crd-publish-openapi-8154-crds.spec' +Oct 19 16:41:10.507: INFO: stderr: "" +Oct 19 16:41:10.507: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8154-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Oct 19 16:41:10.507: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 explain e2e-test-crd-publish-openapi-8154-crds.spec.bars' +Oct 19 16:41:10.625: INFO: stderr: "" +Oct 19 16:41:10.625: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8154-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Oct 19 16:41:10.626: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4581 explain e2e-test-crd-publish-openapi-8154-crds.spec.bars2' +Oct 19 16:41:10.745: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:41:13.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4581" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":147,"skipped":2776,"failed":0} +S +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:41:13.761: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7997 +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-9b545035-d9e5-46bf-a9b6-03bf8a64290d +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:41:16.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7997" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":148,"skipped":2777,"failed":0} +S +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:41:16.075: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2307 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-2307 +STEP: creating service affinity-clusterip in namespace services-2307 +STEP: creating replication controller affinity-clusterip in namespace services-2307 +I1019 16:41:16.224405 4339 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2307, replica count: 3 +I1019 16:41:19.275619 4339 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 16:41:19.282: INFO: Creating new exec pod +Oct 19 
16:41:22.302: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2307 exec execpod-affinityj9m7x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Oct 19 16:41:22.593: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Oct 19 16:41:22.593: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:41:22.593: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2307 exec execpod-affinityj9m7x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.65.26 80' +Oct 19 16:41:22.775: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.65.26 80\nConnection to 100.64.65.26 80 port [tcp/http] succeeded!\n" +Oct 19 16:41:22.775: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:41:22.775: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2307 exec execpod-affinityj9m7x -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.65.26:80/ ; done' +Oct 19 16:41:23.051: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.65.26:80/\n" +Oct 19 16:41:23.051: INFO: stdout: "\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v\naffinity-clusterip-2684v" +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received 
response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Received response from host: affinity-clusterip-2684v +Oct 19 16:41:23.051: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-2307, will wait for the garbage collector to delete the pods +Oct 19 16:41:23.114: INFO: Deleting ReplicationController affinity-clusterip took: 3.781807ms +Oct 19 16:41:23.215: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.346372ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:41:25.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2307" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":149,"skipped":2778,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:41:25.533: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-1597 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Oct 19 16:41:25.678: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23527 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 16:41:25.678: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23527 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Oct 19 16:41:35.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23601 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 16:41:35.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23601 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Oct 19 16:41:45.696: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23645 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 16:41:45.696: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23645 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Oct 19 16:41:55.704: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23687 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 16:41:55.705: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1597 e3bc99c4-3139-4695-87e8-f5852140a0e3 23687 0 2021-10-19 16:41:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-19 16:41:35 +0000 UTC 
FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Oct 19 16:42:05.713: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1597 44e3c7f5-e9ed-4b59-a32f-6cb135e5a8f5 23730 0 2021-10-19 16:42:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-19 16:42:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 16:42:05.713: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1597 44e3c7f5-e9ed-4b59-a32f-6cb135e5a8f5 23730 0 2021-10-19 16:42:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-19 16:42:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Oct 19 16:42:15.720: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1597 44e3c7f5-e9ed-4b59-a32f-6cb135e5a8f5 23775 0 2021-10-19 16:42:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-19 16:42:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 16:42:15.720: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1597 44e3c7f5-e9ed-4b59-a32f-6cb135e5a8f5 23775 0 2021-10-19 16:42:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-19 16:42:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:42:25.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1597" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":150,"skipped":2797,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:42:25.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-5038 +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:42:25.870: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:42:32.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-5038" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":151,"skipped":2859,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:42:32.880: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3514 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:42:33.270: INFO: The status of Pod busybox-host-aliasesb1b712a6-beba-4b4a-bd4b-2c57e4fe3c21 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:42:35.275: INFO: The status of Pod busybox-host-aliasesb1b712a6-beba-4b4a-bd4b-2c57e4fe3c21 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:42:35.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3514" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":152,"skipped":2875,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:42:35.478: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-7751 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 19 16:42:35.685: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 19 16:42:35.693: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 19 16:42:35.696: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 before test +Oct 19 16:42:35.705: INFO: addons-nginx-ingress-controller-6ccd9d5d4d-87wtm from kube-system started at 2021-10-19 16:20:45 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: apiserver-proxy-ftftt from kube-system started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container proxy ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: Container sidecar ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: blackbox-exporter-65c549b94c-c5pzd from kube-system started at 2021-10-19 15:51:26 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: calico-kube-controllers-86c64d79ff-hmgq6 from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: calico-node-gkqll from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container calico-node ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: calico-typha-deploy-58b94ff46-kljnn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container calico-typha ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: csi-driver-node-twl5g from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: kube-proxy-hgtmc from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: node-exporter-v9h4r from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: node-problem-detector-2s6bt from kube-system started at 2021-10-19 16:11:27 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: busybox-host-aliasesb1b712a6-beba-4b4a-bd4b-2c57e4fe3c21 from kubelet-test-3514 started at 2021-10-19 16:42:33 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.705: INFO: Container busybox-host-aliasesb1b712a6-beba-4b4a-bd4b-2c57e4fe3c21 ready: true, restart count 0 +Oct 19 16:42:35.705: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq before test +Oct 19 16:42:35.713: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-ftj5w from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: apiserver-proxy-r6qsz from kube-system 
started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container proxy ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: Container sidecar ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: calico-node-54s6z from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container calico-node ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: calico-node-vertical-autoscaler-785b5f968-w77tx from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-bqq7q from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: calico-typha-vertical-autoscaler-5c9655cddd-w2d9c from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: coredns-9866fb499-7zgkw from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container coredns ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: coredns-9866fb499-kcm5k from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container coredns ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: csi-driver-node-ps5fs from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: kube-proxy-dpksr from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: metrics-server-7958497998-bdvjq from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container metrics-server ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: node-exporter-2xtzn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: node-problem-detector-6n9vb from kube-system started at 2021-10-19 16:11:28 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: vpn-shoot-6cdd4985bc-w7qgp from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: dashboard-metrics-scraper-7ccbfc448f-htlbk from kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 19 16:42:35.713: INFO: kubernetes-dashboard-847f4ffdcd-6s4nf from 
kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:42:35.713: INFO: Container kubernetes-dashboard ready: true, restart count 2 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-fbaf47de-9e00-4ac8-8154-91806a5f581b 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.250.1.123 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-fbaf47de-9e00-4ac8-8154-91806a5f581b off the node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-fbaf47de-9e00-4ac8-8154-91806a5f581b +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:47:39.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-7751" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:304.402 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":153,"skipped":2893,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:47:39.881: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1587 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:47:40.056: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", 
UID:"b39da181-a180-4e8c-a94c-9a3c0a044f62", Controller:(*bool)(0xc0060adde6), BlockOwnerDeletion:(*bool)(0xc0060adde7)}} +Oct 19 16:47:40.062: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"37271d33-8a11-4d41-a52a-b165e95bba39", Controller:(*bool)(0xc00601d73e), BlockOwnerDeletion:(*bool)(0xc00601d73f)}} +Oct 19 16:47:40.067: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1a2e34dd-1913-4d57-b9e8-cb920c97ee8a", Controller:(*bool)(0xc0060568de), BlockOwnerDeletion:(*bool)(0xc0060568df)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:47:45.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1587" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":154,"skipped":2908,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:47:45.085: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-372 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:47:45.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:47:48.554: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:47:48.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-372" for this suite. +STEP: Destroying namespace "webhook-372-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":155,"skipped":2922,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:47:48.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-5807 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:47:50.950: INFO: Deleting pod "var-expansion-3760ab10-5fbd-454f-890d-23d4f764bfed" in namespace "var-expansion-5807" +Oct 19 16:47:50.955: INFO: Wait up to 5m0s for pod "var-expansion-3760ab10-5fbd-454f-890d-23d4f764bfed" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:47:54.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5807" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":156,"skipped":2945,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:47:54.973: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-7102 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pods +Oct 19 16:47:55.121: INFO: created test-pod-1 +Oct 19 16:47:55.129: INFO: created test-pod-2 +Oct 19 16:47:55.138: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Oct 19 16:47:55.160: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 19 16:47:56.165: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 19 16:47:57.164: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:47:58.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7102" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":157,"skipped":2954,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:47:58.175: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3325 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:47:58.570: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:48:01.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:13.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3325" for this suite. +STEP: Destroying namespace "webhook-3325-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":158,"skipped":2973,"failed":0} +S +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:13.954: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-5993 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 19 16:48:14.100: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 19 16:48:14.107: INFO: Waiting for terminating namespaces to be deleted... +Oct 19 16:48:14.110: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 before test +Oct 19 16:48:14.118: INFO: addons-nginx-ingress-controller-6ccd9d5d4d-87wtm from kube-system started at 2021-10-19 16:20:45 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: apiserver-proxy-ftftt from kube-system started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container proxy ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: Container sidecar ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: blackbox-exporter-65c549b94c-c5pzd from kube-system started at 2021-10-19 15:51:26 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: calico-kube-controllers-86c64d79ff-hmgq6 from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: calico-node-gkqll from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container calico-node ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: calico-typha-deploy-58b94ff46-kljnn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container calico-typha ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: csi-driver-node-twl5g from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: 
kube-proxy-hgtmc from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: node-exporter-v9h4r from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: node-problem-detector-2s6bt from kube-system started at 2021-10-19 16:11:27 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.118: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 16:48:14.118: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq before test +Oct 19 16:48:14.125: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-ftj5w from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.125: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 19 16:48:14.125: INFO: apiserver-proxy-r6qsz from kube-system started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:14.125: INFO: Container proxy ready: true, restart count 0 +Oct 19 16:48:14.125: INFO: Container sidecar ready: true, restart count 0 +Oct 19 16:48:14.125: INFO: calico-node-54s6z from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.125: INFO: Container calico-node ready: true, restart count 0 +Oct 19 16:48:14.125: INFO: calico-node-vertical-autoscaler-785b5f968-w77tx from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.125: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:48:14.125: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-bqq7q from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.125: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:48:14.125: INFO: calico-typha-vertical-autoscaler-5c9655cddd-w2d9c from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.125: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:48:14.125: INFO: coredns-9866fb499-7zgkw from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container coredns ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: coredns-9866fb499-kcm5k from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container coredns ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: csi-driver-node-ps5fs from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: kube-proxy-dpksr from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: 
metrics-server-7958497998-bdvjq from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container metrics-server ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: node-exporter-2xtzn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: node-problem-detector-6n9vb from kube-system started at 2021-10-19 16:11:28 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: vpn-shoot-6cdd4985bc-w7qgp from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: dashboard-metrics-scraper-7ccbfc448f-htlbk from kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 19 16:48:14.126: INFO: kubernetes-dashboard-847f4ffdcd-6s4nf from kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:14.126: INFO: Container kubernetes-dashboard ready: true, restart count 2 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.16af7c8389cdb5dc], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:15.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5993" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":159,"skipped":2974,"failed":0} +SS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:15.168: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-5768 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-downwardapi-nwlq +STEP: Creating a pod to test atomic-volume-subpath +Oct 19 16:48:15.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nwlq" in namespace "subpath-5768" to be "Succeeded or Failed" +Oct 19 16:48:15.327: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868844ms +Oct 19 16:48:17.332: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 2.007634818s +Oct 19 16:48:19.337: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 4.0129426s +Oct 19 16:48:21.342: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 6.017872404s +Oct 19 16:48:23.370: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 8.046486913s +Oct 19 16:48:25.375: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 10.051270406s +Oct 19 16:48:27.380: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 12.055862305s +Oct 19 16:48:29.384: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 14.059944143s +Oct 19 16:48:31.388: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 16.06454987s +Oct 19 16:48:33.394: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 18.06974306s +Oct 19 16:48:35.398: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Running", Reason="", readiness=true. Elapsed: 20.07432155s +Oct 19 16:48:37.408: INFO: Pod "pod-subpath-test-downwardapi-nwlq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.084349701s +STEP: Saw pod success +Oct 19 16:48:37.408: INFO: Pod "pod-subpath-test-downwardapi-nwlq" satisfied condition "Succeeded or Failed" +Oct 19 16:48:37.416: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-subpath-test-downwardapi-nwlq container test-container-subpath-downwardapi-nwlq: +STEP: delete the pod +Oct 19 16:48:37.439: INFO: Waiting for pod pod-subpath-test-downwardapi-nwlq to disappear +Oct 19 16:48:37.442: INFO: Pod pod-subpath-test-downwardapi-nwlq no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-nwlq +Oct 19 16:48:37.442: INFO: Deleting pod "pod-subpath-test-downwardapi-nwlq" in namespace "subpath-5768" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:37.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-5768" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":160,"skipped":2976,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:37.456: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-820 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Oct 19 16:48:47.622: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:47.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1019 16:48:47.622636 4339 
metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-820" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":161,"skipped":3054,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:47.631: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6858 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in volume subpath +Oct 19 16:48:47.780: INFO: Waiting up to 5m0s for pod "var-expansion-80af9759-cee2-4ade-a88e-0467498c364a" in namespace "var-expansion-6858" to be "Succeeded or Failed" +Oct 19 16:48:47.783: INFO: Pod "var-expansion-80af9759-cee2-4ade-a88e-0467498c364a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234978ms +Oct 19 16:48:49.787: INFO: Pod "var-expansion-80af9759-cee2-4ade-a88e-0467498c364a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007636821s +STEP: Saw pod success +Oct 19 16:48:49.787: INFO: Pod "var-expansion-80af9759-cee2-4ade-a88e-0467498c364a" satisfied condition "Succeeded or Failed" +Oct 19 16:48:49.791: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod var-expansion-80af9759-cee2-4ade-a88e-0467498c364a container dapi-container: +STEP: delete the pod +Oct 19 16:48:49.806: INFO: Waiting for pod var-expansion-80af9759-cee2-4ade-a88e-0467498c364a to disappear +Oct 19 16:48:49.809: INFO: Pod var-expansion-80af9759-cee2-4ade-a88e-0467498c364a no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:49.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6858" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":162,"skipped":3103,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:49.819: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6298 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating api versions +Oct 19 16:48:49.954: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6298 api-versions' +Oct 19 16:48:50.030: INFO: stderr: "" +Oct 19 16:48:50.030: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling.k8s.io/v1\nautoscaling.k8s.io/v1beta2\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncert.gardener.cloud/v1alpha1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\ndns.gardener.cloud/v1alpha1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:50.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6298" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":163,"skipped":3106,"failed":0} +SSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:50.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-5853 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 19 16:48:50.171: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 19 16:48:50.178: INFO: Waiting for terminating namespaces to be deleted... +Oct 19 16:48:50.181: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 before test +Oct 19 16:48:50.189: INFO: addons-nginx-ingress-controller-6ccd9d5d4d-87wtm from kube-system started at 2021-10-19 16:20:45 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: apiserver-proxy-ftftt from kube-system started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container proxy ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: Container sidecar ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: blackbox-exporter-65c549b94c-c5pzd from kube-system started at 2021-10-19 15:51:26 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: calico-kube-controllers-86c64d79ff-hmgq6 from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: calico-node-gkqll from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container calico-node ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: calico-typha-deploy-58b94ff46-kljnn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container calico-typha ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: csi-driver-node-twl5g from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: kube-proxy-hgtmc from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container conntrack-fix ready: true, restart count 0 
+Oct 19 16:48:50.189: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: node-exporter-v9h4r from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: node-problem-detector-2s6bt from kube-system started at 2021-10-19 16:11:27 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.189: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 16:48:50.189: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq before test +Oct 19 16:48:50.196: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-ftj5w from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: apiserver-proxy-r6qsz from kube-system started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container proxy ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: Container sidecar ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: calico-node-54s6z from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container calico-node ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: calico-node-vertical-autoscaler-785b5f968-w77tx from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-bqq7q from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: calico-typha-vertical-autoscaler-5c9655cddd-w2d9c from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: coredns-9866fb499-7zgkw from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container coredns ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: coredns-9866fb499-kcm5k from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container coredns ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: csi-driver-node-ps5fs from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: kube-proxy-dpksr from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: metrics-server-7958497998-bdvjq from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container metrics-server ready: true, restart count 0 +Oct 
19 16:48:50.196: INFO: node-exporter-2xtzn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: node-problem-detector-6n9vb from kube-system started at 2021-10-19 16:11:28 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: vpn-shoot-6cdd4985bc-w7qgp from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: dashboard-metrics-scraper-7ccbfc448f-htlbk from kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Oct 19 16:48:50.196: INFO: kubernetes-dashboard-847f4ffdcd-6s4nf from kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 16:48:50.196: INFO: Container kubernetes-dashboard ready: true, restart count 2 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: verifying the node has the label node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +STEP: verifying the node has the label node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod addons-nginx-ingress-controller-6ccd9d5d4d-87wtm requesting resource cpu=100m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-ftj5w requesting resource cpu=0m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod apiserver-proxy-ftftt requesting resource cpu=40m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod apiserver-proxy-r6qsz requesting resource cpu=40m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod blackbox-exporter-65c549b94c-c5pzd requesting resource cpu=11m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod calico-kube-controllers-86c64d79ff-hmgq6 requesting resource cpu=10m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod calico-node-54s6z requesting resource cpu=250m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod calico-node-gkqll requesting resource cpu=250m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod calico-node-vertical-autoscaler-785b5f968-w77tx requesting resource cpu=10m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod calico-typha-deploy-58b94ff46-kljnn requesting resource cpu=200m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod calico-typha-horizontal-autoscaler-5b58bb446c-bqq7q requesting resource cpu=10m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod calico-typha-vertical-autoscaler-5c9655cddd-w2d9c requesting resource cpu=10m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod coredns-9866fb499-7zgkw requesting resource cpu=50m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod 
coredns-9866fb499-kcm5k requesting resource cpu=50m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod csi-driver-node-ps5fs requesting resource cpu=40m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod csi-driver-node-twl5g requesting resource cpu=40m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod kube-proxy-dpksr requesting resource cpu=34m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod kube-proxy-hgtmc requesting resource cpu=34m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod metrics-server-7958497998-bdvjq requesting resource cpu=50m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod node-exporter-2xtzn requesting resource cpu=50m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod node-exporter-v9h4r requesting resource cpu=50m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod node-problem-detector-2s6bt requesting resource cpu=11m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.235: INFO: Pod node-problem-detector-6n9vb requesting resource cpu=11m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod vpn-shoot-6cdd4985bc-w7qgp requesting resource cpu=100m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod dashboard-metrics-scraper-7ccbfc448f-htlbk requesting resource cpu=0m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +Oct 19 16:48:50.235: INFO: Pod kubernetes-dashboard-847f4ffdcd-6s4nf requesting resource cpu=50m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +STEP: Starting Pods to consume most of the cluster CPU. +Oct 19 16:48:50.235: INFO: Creating a pod which consumes cpu=821m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +Oct 19 16:48:50.244: INFO: Creating a pod which consumes cpu=815m on Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-b6a7f7cc-c710-435d-b1f0-3cf8ef5d4ffe.16af7c8bf18a4365], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5853/filler-pod-b6a7f7cc-c710-435d-b1f0-3cf8ef5d4ffe to shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b6a7f7cc-c710-435d-b1f0-3cf8ef5d4ffe.16af7c8c128e15c6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b6a7f7cc-c710-435d-b1f0-3cf8ef5d4ffe.16af7c8c1424bc78], Reason = [Created], Message = [Created container filler-pod-b6a7f7cc-c710-435d-b1f0-3cf8ef5d4ffe] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b6a7f7cc-c710-435d-b1f0-3cf8ef5d4ffe.16af7c8c17495431], Reason = [Started], Message = [Started container filler-pod-b6a7f7cc-c710-435d-b1f0-3cf8ef5d4ffe] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f4cc41b0-d0fc-4ab1-b81d-af3995ba37cf.16af7c8bf11ac0b0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5853/filler-pod-f4cc41b0-d0fc-4ab1-b81d-af3995ba37cf to shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f4cc41b0-d0fc-4ab1-b81d-af3995ba37cf.16af7c8c0f3d1295], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f4cc41b0-d0fc-4ab1-b81d-af3995ba37cf.16af7c8c10a258ae], Reason = [Created], Message = [Created container filler-pod-f4cc41b0-d0fc-4ab1-b81d-af3995ba37cf] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f4cc41b0-d0fc-4ab1-b81d-af3995ba37cf.16af7c8c137ee7ab], Reason = [Started], Message = [Started container filler-pod-f4cc41b0-d0fc-4ab1-b81d-af3995ba37cf] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.16af7c8c6a423eac], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] +STEP: removing the label node off the node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:53.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5853" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":164,"skipped":3111,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:53.319: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-1601 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Oct 19 16:48:53.485: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1601 d7f00be5-db18-4797-8d93-195f8081103f 26136 0 2021-10-19 16:48:53 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-19 16:48:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 16:48:53.485: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1601 d7f00be5-db18-4797-8d93-195f8081103f 26137 0 2021-10-19 16:48:53 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-19 16:48:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:53.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1601" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":165,"skipped":3124,"failed":0} +SSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:53.493: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5391 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:48:53.639: INFO: The status of Pod server-envvars-47b81e70-7ade-4b38-85d7-256c0aec631c is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:48:55.646: INFO: The status of Pod server-envvars-47b81e70-7ade-4b38-85d7-256c0aec631c is Running (Ready = true) +Oct 19 16:48:55.665: INFO: Waiting up to 5m0s for pod "client-envvars-18b3cb81-c4c5-49c3-9786-aceca154ea0b" in namespace "pods-5391" to be "Succeeded or Failed" +Oct 19 16:48:55.668: INFO: Pod "client-envvars-18b3cb81-c4c5-49c3-9786-aceca154ea0b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228283ms +Oct 19 16:48:57.674: INFO: Pod "client-envvars-18b3cb81-c4c5-49c3-9786-aceca154ea0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00847096s +STEP: Saw pod success +Oct 19 16:48:57.674: INFO: Pod "client-envvars-18b3cb81-c4c5-49c3-9786-aceca154ea0b" satisfied condition "Succeeded or Failed" +Oct 19 16:48:57.677: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod client-envvars-18b3cb81-c4c5-49c3-9786-aceca154ea0b container env3cont: +STEP: delete the pod +Oct 19 16:48:57.692: INFO: Waiting for pod client-envvars-18b3cb81-c4c5-49c3-9786-aceca154ea0b to disappear +Oct 19 16:48:57.697: INFO: Pod client-envvars-18b3cb81-c4c5-49c3-9786-aceca154ea0b no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:57.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5391" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":166,"skipped":3129,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:57.706: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5169 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-e5d7fa88-2989-4d28-a9c2-8fe882d165e6 +STEP: Creating a pod to test consume configMaps +Oct 19 16:48:57.854: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-571a03b0-cc3f-46ac-a9fb-272dc66ac6c8" in namespace "projected-5169" to be "Succeeded or Failed" +Oct 19 16:48:57.858: INFO: Pod "pod-projected-configmaps-571a03b0-cc3f-46ac-a9fb-272dc66ac6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.388924ms +Oct 19 16:48:59.862: INFO: Pod "pod-projected-configmaps-571a03b0-cc3f-46ac-a9fb-272dc66ac6c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007448442s +STEP: Saw pod success +Oct 19 16:48:59.862: INFO: Pod "pod-projected-configmaps-571a03b0-cc3f-46ac-a9fb-272dc66ac6c8" satisfied condition "Succeeded or Failed" +Oct 19 16:48:59.865: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-configmaps-571a03b0-cc3f-46ac-a9fb-272dc66ac6c8 container agnhost-container: +STEP: delete the pod +Oct 19 16:48:59.885: INFO: Waiting for pod pod-projected-configmaps-571a03b0-cc3f-46ac-a9fb-272dc66ac6c8 to disappear +Oct 19 16:48:59.888: INFO: Pod pod-projected-configmaps-571a03b0-cc3f-46ac-a9fb-272dc66ac6c8 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:48:59.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5169" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":167,"skipped":3142,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:48:59.898: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7656 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-7656, will wait for the garbage collector to delete the pods +Oct 19 16:49:02.101: INFO: Deleting Job.batch foo took: 4.065333ms +Oct 19 16:49:02.201: INFO: Terminating Job.batch foo pods took: 100.939236ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:49:34.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-7656" for this suite. +•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":168,"skipped":3194,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:49:34.515: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9245 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Oct 19 16:49:34.652: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 16:49:37.522: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:49:49.377: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9245" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":169,"skipped":3234,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:49:49.387: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-9980 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:49:49.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-9980" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":170,"skipped":3285,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:49:49.555: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6357 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-8a70e98f-8539-4408-8c58-a28c683014f7 +STEP: Creating a pod to test consume secrets +Oct 19 16:49:49.707: INFO: Waiting up to 5m0s for pod "pod-secrets-573a3e54-a5f0-4253-b15b-0812fe912026" in namespace "secrets-6357" to be "Succeeded or Failed" +Oct 19 16:49:49.712: INFO: Pod "pod-secrets-573a3e54-a5f0-4253-b15b-0812fe912026": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599867ms +Oct 19 16:49:51.717: INFO: Pod "pod-secrets-573a3e54-a5f0-4253-b15b-0812fe912026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010163948s +STEP: Saw pod success +Oct 19 16:49:51.717: INFO: Pod "pod-secrets-573a3e54-a5f0-4253-b15b-0812fe912026" satisfied condition "Succeeded or Failed" +Oct 19 16:49:51.722: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-573a3e54-a5f0-4253-b15b-0812fe912026 container secret-env-test: +STEP: delete the pod +Oct 19 16:49:51.738: INFO: Waiting for pod pod-secrets-573a3e54-a5f0-4253-b15b-0812fe912026 to disappear +Oct 19 16:49:51.741: INFO: Pod pod-secrets-573a3e54-a5f0-4253-b15b-0812fe912026 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:49:51.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6357" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":171,"skipped":3302,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:49:51.751: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5992 +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Oct 19 16:49:51.895: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 19 16:49:56.899: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:49:56.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5992" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":172,"skipped":3316,"failed":0} +SSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:49:56.938: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3578 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 19 16:49:57.082: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3578 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 19 16:49:57.147: INFO: stderr: "" +Oct 19 16:49:57.147: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Oct 19 16:50:02.197: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3578 get pod e2e-test-httpd-pod -o json' +Oct 19 16:50:02.246: INFO: stderr: "" +Oct 19 16:50:02.246: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"100.96.0.209/32\",\n \"cni.projectcalico.org/podIPs\": \"100.96.0.209/32\",\n \"kubernetes.io/psp\": \"e2e-test-privileged-psp\"\n },\n \"creationTimestamp\": \"2021-10-19T16:49:57Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3578\",\n \"resourceVersion\": \"26677\",\n \"uid\": \"1b2242cc-c170-4cc2-bbba-92006ffa93e8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"env\": [\n {\n \"name\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"api.tmhay-ddd.it.internal.staging.k8s.ondemand.com\"\n }\n ],\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-vvrtb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": 
\"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-vvrtb\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-19T16:49:57Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-19T16:49:58Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-19T16:49:58Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-19T16:49:57Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c89596c5b178f74cb286df49c745e255202af7da79cbf6bf53ed2b72f3b4ca5e\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-19T16:49:57Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.250.1.123\",\n \"phase\": \"Running\",\n \"podIP\": \"100.96.0.209\",\n \"podIPs\": [\n {\n \"ip\": \"100.96.0.209\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-19T16:49:57Z\"\n }\n}\n" +STEP: replace the image in the pod +Oct 19 16:50:02.246: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3578 replace -f -' +Oct 19 16:50:02.408: INFO: stderr: "" +Oct 19 16:50:02.408: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 +Oct 19 16:50:02.411: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-3578 delete pods e2e-test-httpd-pod' +Oct 19 16:50:04.461: INFO: 
stderr: "" +Oct 19 16:50:04.461: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:50:04.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3578" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":173,"skipped":3320,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:50:04.470: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3475 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:50:04.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3475" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":174,"skipped":3329,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:50:04.649: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4805 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:50:04.794: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10bb23a4-87a3-4a6e-a872-2ced1a7b9296" in namespace "projected-4805" to be "Succeeded or Failed" +Oct 19 16:50:04.797: INFO: Pod "downwardapi-volume-10bb23a4-87a3-4a6e-a872-2ced1a7b9296": Phase="Pending", Reason="", readiness=false. Elapsed: 3.236121ms +Oct 19 16:50:06.801: INFO: Pod "downwardapi-volume-10bb23a4-87a3-4a6e-a872-2ced1a7b9296": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007302688s +STEP: Saw pod success +Oct 19 16:50:06.801: INFO: Pod "downwardapi-volume-10bb23a4-87a3-4a6e-a872-2ced1a7b9296" satisfied condition "Succeeded or Failed" +Oct 19 16:50:06.805: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-10bb23a4-87a3-4a6e-a872-2ced1a7b9296 container client-container: +STEP: delete the pod +Oct 19 16:50:06.818: INFO: Waiting for pod downwardapi-volume-10bb23a4-87a3-4a6e-a872-2ced1a7b9296 to disappear +Oct 19 16:50:06.821: INFO: Pod downwardapi-volume-10bb23a4-87a3-4a6e-a872-2ced1a7b9296 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:50:06.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4805" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":175,"skipped":3341,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:50:06.830: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2120 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-2aa36a3e-cc1e-4665-8b3b-f32aa60c8b84 +STEP: Creating a pod to test consume configMaps +Oct 19 16:50:06.977: INFO: Waiting up to 5m0s for pod "pod-configmaps-7917c0d3-75d4-4827-aad7-706f6fb9f0ee" in namespace "configmap-2120" to be "Succeeded or Failed" +Oct 19 16:50:06.981: INFO: Pod "pod-configmaps-7917c0d3-75d4-4827-aad7-706f6fb9f0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234855ms +Oct 19 16:50:08.985: INFO: Pod "pod-configmaps-7917c0d3-75d4-4827-aad7-706f6fb9f0ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00830815s +STEP: Saw pod success +Oct 19 16:50:08.985: INFO: Pod "pod-configmaps-7917c0d3-75d4-4827-aad7-706f6fb9f0ee" satisfied condition "Succeeded or Failed" +Oct 19 16:50:08.989: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-7917c0d3-75d4-4827-aad7-706f6fb9f0ee container agnhost-container: +STEP: delete the pod +Oct 19 16:50:09.003: INFO: Waiting for pod pod-configmaps-7917c0d3-75d4-4827-aad7-706f6fb9f0ee to disappear +Oct 19 16:50:09.006: INFO: Pod pod-configmaps-7917c0d3-75d4-4827-aad7-706f6fb9f0ee no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:50:09.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2120" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":176,"skipped":3376,"failed":0} +SSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:50:09.015: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-233 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:50:09.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e241e7d9-ba10-45bc-b085-bed8d61bff2e" in namespace "downward-api-233" to be "Succeeded or Failed" +Oct 19 16:50:09.162: INFO: Pod "downwardapi-volume-e241e7d9-ba10-45bc-b085-bed8d61bff2e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.38277ms +Oct 19 16:50:11.166: INFO: Pod "downwardapi-volume-e241e7d9-ba10-45bc-b085-bed8d61bff2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007567341s +STEP: Saw pod success +Oct 19 16:50:11.166: INFO: Pod "downwardapi-volume-e241e7d9-ba10-45bc-b085-bed8d61bff2e" satisfied condition "Succeeded or Failed" +Oct 19 16:50:11.170: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-e241e7d9-ba10-45bc-b085-bed8d61bff2e container client-container: +STEP: delete the pod +Oct 19 16:50:11.184: INFO: Waiting for pod downwardapi-volume-e241e7d9-ba10-45bc-b085-bed8d61bff2e to disappear +Oct 19 16:50:11.190: INFO: Pod downwardapi-volume-e241e7d9-ba10-45bc-b085-bed8d61bff2e no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:50:11.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-233" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":177,"skipped":3381,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:50:11.211: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5501 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:50:11.978: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:50:15.077: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:50:15.081: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3030-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:50:18.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5501" for this suite. +STEP: Destroying namespace "webhook-5501-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":178,"skipped":3396,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:50:18.453: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9033 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:50:19.387: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:50:22.405: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:50:22.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9033" for this suite. +STEP: Destroying namespace "webhook-9033-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":179,"skipped":3411,"failed":0} +S +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:50:22.560: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9622 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-dd81c875-9cf1-4aac-9f18-3f7fc196a13f +STEP: Creating the pod +Oct 19 16:50:22.718: INFO: The status of Pod pod-configmaps-8383a956-d16d-4f6d-8f10-03009f47157c is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:50:24.722: INFO: The status of Pod pod-configmaps-8383a956-d16d-4f6d-8f10-03009f47157c is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-dd81c875-9cf1-4aac-9f18-3f7fc196a13f +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:51:31.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9622" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":180,"skipped":3412,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:51:31.160: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-4810 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-4810 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-4810 +Oct 19 16:51:31.315: INFO: Found 0 stateful pods, waiting for 1 +Oct 19 16:51:41.324: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Oct 19 16:51:41.342: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Oct 19 16:51:41.350: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Oct 19 16:51:41.353: INFO: Observed &StatefulSet event: ADDED +Oct 19 16:51:41.353: INFO: Found Statefulset ss in namespace statefulset-4810 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 19 16:51:41.353: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Oct 19 16:51:41.353: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 19 16:51:41.363: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Oct 19 16:51:41.367: INFO: Observed &StatefulSet event: ADDED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 19 16:51:41.367: INFO: Deleting all statefulset in ns statefulset-4810 +Oct 19 16:51:41.370: INFO: Scaling statefulset ss to 0 +Oct 19 16:51:51.391: INFO: 
Waiting for statefulset status.replicas updated to 0 +Oct 19 16:51:51.395: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:51:51.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4810" for this suite. +•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":181,"skipped":3470,"failed":0} +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:51:51.415: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6624 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:51:51.560: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00a8c669-1a71-4606-a817-7649c472522a" in namespace "projected-6624" to be "Succeeded or Failed" +Oct 19 16:51:51.565: INFO: Pod "downwardapi-volume-00a8c669-1a71-4606-a817-7649c472522a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.745442ms +Oct 19 16:51:53.569: INFO: Pod "downwardapi-volume-00a8c669-1a71-4606-a817-7649c472522a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009211904s +STEP: Saw pod success +Oct 19 16:51:53.569: INFO: Pod "downwardapi-volume-00a8c669-1a71-4606-a817-7649c472522a" satisfied condition "Succeeded or Failed" +Oct 19 16:51:53.573: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-00a8c669-1a71-4606-a817-7649c472522a container client-container: +STEP: delete the pod +Oct 19 16:51:53.628: INFO: Waiting for pod downwardapi-volume-00a8c669-1a71-4606-a817-7649c472522a to disappear +Oct 19 16:51:53.631: INFO: Pod downwardapi-volume-00a8c669-1a71-4606-a817-7649c472522a no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:51:53.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6624" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":182,"skipped":3473,"failed":0} + +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:51:53.641: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2815 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-5a72eb2f-f321-4c4e-b15e-bb7769beda0a +STEP: Creating a pod to test consume configMaps +Oct 19 16:51:53.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1e3e559-d832-4c85-8207-3e5ab28ba73f" in namespace "configmap-2815" to be "Succeeded or Failed" +Oct 19 16:51:53.797: INFO: Pod "pod-configmaps-f1e3e559-d832-4c85-8207-3e5ab28ba73f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.001052ms +Oct 19 16:51:55.802: INFO: Pod "pod-configmaps-f1e3e559-d832-4c85-8207-3e5ab28ba73f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007813774s +STEP: Saw pod success +Oct 19 16:51:55.802: INFO: Pod "pod-configmaps-f1e3e559-d832-4c85-8207-3e5ab28ba73f" satisfied condition "Succeeded or Failed" +Oct 19 16:51:55.805: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-f1e3e559-d832-4c85-8207-3e5ab28ba73f container agnhost-container: +STEP: delete the pod +Oct 19 16:51:55.865: INFO: Waiting for pod pod-configmaps-f1e3e559-d832-4c85-8207-3e5ab28ba73f to disappear +Oct 19 16:51:55.868: INFO: Pod pod-configmaps-f1e3e559-d832-4c85-8207-3e5ab28ba73f no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:51:55.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2815" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":183,"skipped":3473,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:51:55.876: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9741 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 19 16:51:56.021: INFO: Waiting up to 5m0s for pod "pod-bcb1b91e-5013-490d-8760-e8d6b2e3dbf4" in namespace "emptydir-9741" to be "Succeeded or Failed" +Oct 19 16:51:56.024: INFO: Pod "pod-bcb1b91e-5013-490d-8760-e8d6b2e3dbf4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158345ms +Oct 19 16:51:58.028: INFO: Pod "pod-bcb1b91e-5013-490d-8760-e8d6b2e3dbf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007023261s +STEP: Saw pod success +Oct 19 16:51:58.028: INFO: Pod "pod-bcb1b91e-5013-490d-8760-e8d6b2e3dbf4" satisfied condition "Succeeded or Failed" +Oct 19 16:51:58.031: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-bcb1b91e-5013-490d-8760-e8d6b2e3dbf4 container test-container: +STEP: delete the pod +Oct 19 16:51:58.087: INFO: Waiting for pod pod-bcb1b91e-5013-490d-8760-e8d6b2e3dbf4 to disappear +Oct 19 16:51:58.090: INFO: Pod pod-bcb1b91e-5013-490d-8760-e8d6b2e3dbf4 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:51:58.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9741" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":184,"skipped":3482,"failed":0} + +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:51:58.099: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7624 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 19 16:51:58.243: INFO: Waiting up to 5m0s for pod "pod-e88591fd-7347-48cc-9e8b-e8c546ce6338" in namespace "emptydir-7624" to be "Succeeded or Failed" +Oct 19 16:51:58.247: INFO: Pod "pod-e88591fd-7347-48cc-9e8b-e8c546ce6338": Phase="Pending", Reason="", readiness=false. Elapsed: 3.696934ms +Oct 19 16:52:00.250: INFO: Pod "pod-e88591fd-7347-48cc-9e8b-e8c546ce6338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007054328s +STEP: Saw pod success +Oct 19 16:52:00.250: INFO: Pod "pod-e88591fd-7347-48cc-9e8b-e8c546ce6338" satisfied condition "Succeeded or Failed" +Oct 19 16:52:00.254: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-e88591fd-7347-48cc-9e8b-e8c546ce6338 container test-container: +STEP: delete the pod +Oct 19 16:52:00.267: INFO: Waiting for pod pod-e88591fd-7347-48cc-9e8b-e8c546ce6338 to disappear +Oct 19 16:52:00.270: INFO: Pod pod-e88591fd-7347-48cc-9e8b-e8c546ce6338 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:52:00.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7624" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":185,"skipped":3482,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:52:00.280: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename tables +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-993 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:52:00.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-993" for this suite. +•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":186,"skipped":3510,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:52:00.426: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-5946 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Oct 19 16:52:00.572: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 19 16:53:00.610: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. 
+Oct 19 16:53:00.631: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 19 16:53:00.639: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 19 16:53:00.655: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 19 16:53:00.662: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:53:06.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-5946" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 +•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":187,"skipped":3524,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:06.749: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-9620 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:53:06.899: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Oct 19 16:53:06.906: INFO: Number of nodes with available pods: 0 +Oct 19 16:53:06.906: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Oct 19 16:53:06.923: INFO: Number of nodes with available pods: 0 +Oct 19 16:53:06.923: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq is running more than one daemon pod +Oct 19 16:53:07.927: INFO: Number of nodes with available pods: 1 +Oct 19 16:53:07.927: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Oct 19 16:53:07.944: INFO: Number of nodes with available pods: 1 +Oct 19 16:53:07.944: INFO: Number of running nodes: 0, number of available pods: 1 +Oct 19 16:53:08.949: INFO: Number of nodes with available pods: 0 +Oct 19 16:53:08.949: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Oct 19 16:53:08.958: INFO: Number of nodes with available pods: 0 +Oct 19 16:53:08.958: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq is running more than one daemon pod +Oct 19 16:53:09.962: INFO: Number of nodes with available pods: 0 +Oct 19 16:53:09.962: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq is running more than one daemon pod +Oct 19 16:53:10.962: INFO: Number of nodes with available pods: 0 +Oct 19 16:53:10.962: INFO: Node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq is running more than one daemon pod +Oct 19 16:53:11.971: INFO: Number of nodes with available pods: 1 +Oct 19 16:53:11.971: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9620, will wait for the garbage collector to delete the pods +Oct 19 16:53:12.073: INFO: Deleting DaemonSet.extensions daemon-set took: 40.726302ms +Oct 19 16:53:12.174: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.9632ms +Oct 19 16:53:14.878: INFO: Number of nodes with available pods: 0 +Oct 19 16:53:14.878: INFO: Number of running nodes: 0, number of available pods: 0 +Oct 19 16:53:14.881: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"28133"},"items":null} + +Oct 19 16:53:14.884: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"28133"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:53:14.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-9620" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":188,"skipped":3546,"failed":0} +SS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:14.911: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-7748 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:53:22.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7748" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":346,"completed":189,"skipped":3548,"failed":0} +SSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:22.067: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5428 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:53:22.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5428" for this suite. 
+•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":190,"skipped":3554,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:22.235: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9508 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-cfd1c437-5bea-449f-bea9-b6cae86b92a3 +STEP: Creating a pod to test consume secrets +Oct 19 16:53:22.380: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4cc017a6-7805-419e-97cf-da8c244eceee" in namespace "projected-9508" to be "Succeeded or Failed" +Oct 19 16:53:22.383: INFO: Pod "pod-projected-secrets-4cc017a6-7805-419e-97cf-da8c244eceee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.757521ms +Oct 19 16:53:24.387: INFO: Pod "pod-projected-secrets-4cc017a6-7805-419e-97cf-da8c244eceee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00678083s +STEP: Saw pod success +Oct 19 16:53:24.387: INFO: Pod "pod-projected-secrets-4cc017a6-7805-419e-97cf-da8c244eceee" satisfied condition "Succeeded or Failed" +Oct 19 16:53:24.390: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-secrets-4cc017a6-7805-419e-97cf-da8c244eceee container projected-secret-volume-test: +STEP: delete the pod +Oct 19 16:53:24.405: INFO: Waiting for pod pod-projected-secrets-4cc017a6-7805-419e-97cf-da8c244eceee to disappear +Oct 19 16:53:24.408: INFO: Pod pod-projected-secrets-4cc017a6-7805-419e-97cf-da8c244eceee no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:53:24.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9508" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":191,"skipped":3594,"failed":0} +SS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:24.418: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-2256 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Oct 19 16:53:25.222: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +W1019 16:53:25.222405 4339 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 19 16:53:25.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-2256" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":192,"skipped":3596,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:25.231: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4580 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:53:25.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4129e446-d20b-43c2-86a3-80e9bb551e39" in namespace "projected-4580" to be "Succeeded or Failed" +Oct 19 16:53:25.379: INFO: Pod "downwardapi-volume-4129e446-d20b-43c2-86a3-80e9bb551e39": Phase="Pending", Reason="", readiness=false. Elapsed: 3.353412ms +Oct 19 16:53:27.383: INFO: Pod "downwardapi-volume-4129e446-d20b-43c2-86a3-80e9bb551e39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007698508s +STEP: Saw pod success +Oct 19 16:53:27.383: INFO: Pod "downwardapi-volume-4129e446-d20b-43c2-86a3-80e9bb551e39" satisfied condition "Succeeded or Failed" +Oct 19 16:53:27.386: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-4129e446-d20b-43c2-86a3-80e9bb551e39 container client-container: +STEP: delete the pod +Oct 19 16:53:27.400: INFO: Waiting for pod downwardapi-volume-4129e446-d20b-43c2-86a3-80e9bb551e39 to disappear +Oct 19 16:53:27.403: INFO: Pod downwardapi-volume-4129e446-d20b-43c2-86a3-80e9bb551e39 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:53:27.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4580" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":193,"skipped":3621,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:27.411: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9394 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9394.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9394.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9394.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9394.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9394.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.194.65.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.65.194.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.194.65.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.65.194.16_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9394.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9394.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9394.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9394.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9394.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9394.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.194.65.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.65.194.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.194.65.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.65.194.16_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 16:53:29.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.644: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.694: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.701: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.751: INFO: Unable to read jessie_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.758: INFO: Unable to read jessie_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.764: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.769: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:29.803: INFO: Lookups using dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814 failed for: [wheezy_udp@dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_udp@dns-test-service.dns-9394.svc.cluster.local jessie_tcp@dns-test-service.dns-9394.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local] + +Oct 19 16:53:34.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:34.815: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods 
dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:34.821: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:34.866: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:34.949: INFO: Unable to read jessie_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:34.958: INFO: Unable to read jessie_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:34.965: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:34.975: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:35.013: INFO: Lookups using dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814 failed for: [wheezy_udp@dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_udp@dns-test-service.dns-9394.svc.cluster.local jessie_tcp@dns-test-service.dns-9394.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local] + +Oct 19 16:53:39.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.817: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.823: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.829: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.911: INFO: Unable to read jessie_udp@dns-test-service.dns-9394.svc.cluster.local from pod 
dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.917: INFO: Unable to read jessie_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.923: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:39.962: INFO: Lookups using dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814 failed for: [wheezy_udp@dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_udp@dns-test-service.dns-9394.svc.cluster.local jessie_tcp@dns-test-service.dns-9394.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local] + +Oct 19 16:53:44.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.816: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.822: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.827: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.904: INFO: Unable to read jessie_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.910: INFO: Unable to read jessie_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.916: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.921: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:44.954: INFO: Lookups using dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814 failed for: [wheezy_udp@dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_udp@dns-test-service.dns-9394.svc.cluster.local jessie_tcp@dns-test-service.dns-9394.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local] + +Oct 19 16:53:49.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.817: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.826: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.870: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.909: INFO: Unable to read jessie_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.914: INFO: Unable to read jessie_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.919: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.924: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:49.961: INFO: Lookups using dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814 failed for: [wheezy_udp@dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_udp@dns-test-service.dns-9394.svc.cluster.local jessie_tcp@dns-test-service.dns-9394.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local] + +Oct 19 16:53:54.813: INFO: Unable to read wheezy_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.819: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.863: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.869: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.914: INFO: Unable to read jessie_udp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.919: INFO: Unable to read jessie_tcp@dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.925: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.931: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local from pod dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814: the server could not find the requested resource (get pods dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814) +Oct 19 16:53:54.970: INFO: Lookups using dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814 failed for: [wheezy_udp@dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@dns-test-service.dns-9394.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_udp@dns-test-service.dns-9394.svc.cluster.local jessie_tcp@dns-test-service.dns-9394.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9394.svc.cluster.local] + +Oct 19 16:53:59.956: INFO: DNS probes using dns-9394/dns-test-fb922dbb-a09f-4a92-bd10-2dd1c28a0814 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:53:59.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9394" for this suite. 
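+
+The probe loop shown in the STEP output above is the heart of this test: it retries `dig` once per second, up to 600 times, and writes an OK marker file only once a non-empty answer comes back, which is why the transient "Unable to read" lookups before 16:53:59 still end in a pass. A minimal sketch of a single UDP probe, using the service name and marker path from the log (the `$$` doubling in the log is escaping added by the test framework; a direct shell run uses a single `$`):
+
+```bash
+# One probe from the loop above; the test repeats this (plus a +tcp
+# variant) for every expected record until the marker file exists.
+for i in $(seq 1 600); do
+  check="$(dig +notcp +noall +answer +search dns-test-service.dns-9394.svc.cluster.local A)" \
+    && test -n "$check" \
+    && echo OK > /results/jessie_udp@dns-test-service.dns-9394.svc.cluster.local
+  sleep 1
+done
+```
+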
+•{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":194,"skipped":3656,"failed":0} +S +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:53:59.994: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3752 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:54:00.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3752" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":195,"skipped":3657,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:54:00.155: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-1947 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption is created +Oct 19 16:54:00.300: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:54:02.304: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:54:03.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
+STEP: Destroying namespace "replication-controller-1947" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":196,"skipped":3695,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:54:03.330: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8777 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 16:54:03.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-900d9d99-6f84-48a2-9fec-6fc1cb7dbb72" in namespace "projected-8777" to be "Succeeded or Failed" +Oct 19 16:54:03.478: INFO: Pod "downwardapi-volume-900d9d99-6f84-48a2-9fec-6fc1cb7dbb72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301698ms +Oct 19 16:54:05.482: INFO: Pod "downwardapi-volume-900d9d99-6f84-48a2-9fec-6fc1cb7dbb72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007433235s +STEP: Saw pod success +Oct 19 16:54:05.483: INFO: Pod "downwardapi-volume-900d9d99-6f84-48a2-9fec-6fc1cb7dbb72" satisfied condition "Succeeded or Failed" +Oct 19 16:54:05.486: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-900d9d99-6f84-48a2-9fec-6fc1cb7dbb72 container client-container: +STEP: delete the pod +Oct 19 16:54:05.501: INFO: Waiting for pod downwardapi-volume-900d9d99-6f84-48a2-9fec-6fc1cb7dbb72 to disappear +Oct 19 16:54:05.504: INFO: Pod downwardapi-volume-900d9d99-6f84-48a2-9fec-6fc1cb7dbb72 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:54:05.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8777" for this suite. 
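+
+For reference, the pod this downward-API test builds looks roughly like the sketch below (pod name and mount path are illustrative, not taken from the log): a projected volume whose `resourceFieldRef` points at `limits.cpu`, so when the container sets no limit the mounted file reports the node's allocatable CPU instead.
+
+```bash
+# Hypothetical equivalent of the test pod; only the projected downward
+# API volume and the absent cpu limit matter here.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-volume-demo   # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: cpu_limit
+            resourceFieldRef:
+              containerName: client-container
+              resource: limits.cpu
+EOF
+```
+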
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":197,"skipped":3726,"failed":0} + +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:54:05.513: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-606 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:54:05.994: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:54:09.015: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:54:09.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-606" for this suite. +STEP: Destroying namespace "webhook-606-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":198,"skipped":3726,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:54:09.219: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1673 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:54:09.647: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:54:12.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Oct 19 16:54:14.772: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=webhook-1673 attach --namespace=webhook-1673 to-be-attached-pod -i -c=container1' +Oct 19 16:54:15.011: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:54:15.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1673" for this suite. +STEP: Destroying namespace "webhook-1673-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":199,"skipped":3750,"failed":0} + +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:54:15.065: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6872 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:54:31.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6872" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":346,"completed":200,"skipped":3750,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:54:31.286: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3962 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3962 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-3962 +STEP: creating replication controller externalsvc in namespace services-3962 +I1019 16:54:31.439661 4339 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3962, replica count: 2 +I1019 16:54:34.490715 4339 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Oct 19 16:54:34.505: INFO: Creating new exec pod +Oct 19 16:54:36.527: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-3962 exec execpodx26bt -- /bin/sh -x -c nslookup clusterip-service.services-3962.svc.cluster.local' +Oct 19 16:54:36.772: INFO: stderr: "+ nslookup clusterip-service.services-3962.svc.cluster.local\n" +Oct 19 16:54:36.772: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nclusterip-service.services-3962.svc.cluster.local\tcanonical name = externalsvc.services-3962.svc.cluster.local.\nName:\texternalsvc.services-3962.svc.cluster.local\nAddress: 100.66.171.10\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-3962, will wait for the garbage collector to delete the pods +Oct 19 16:54:36.848: INFO: Deleting ReplicationController externalsvc took: 22.260111ms +Oct 19 16:54:36.949: INFO: Terminating ReplicationController externalsvc pods took: 100.794785ms +Oct 19 16:54:39.158: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:54:39.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3962" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":201,"skipped":3756,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:54:39.172: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6613 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:55:07.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6613" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":346,"completed":202,"skipped":3791,"failed":0} +SSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:07.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-1988 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:55:10.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-1988" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":203,"skipped":3795,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:10.119: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6185 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-dac77384-d5dd-4864-980a-79f0d0752e6b +STEP: Creating a pod to test consume secrets +Oct 19 16:55:10.264: INFO: Waiting up to 5m0s for pod "pod-secrets-666805c5-f2b0-4a41-a931-9d5db3d96b87" in namespace "secrets-6185" to be "Succeeded or Failed" +Oct 19 16:55:10.268: INFO: Pod "pod-secrets-666805c5-f2b0-4a41-a931-9d5db3d96b87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.884727ms +Oct 19 16:55:12.272: INFO: Pod "pod-secrets-666805c5-f2b0-4a41-a931-9d5db3d96b87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007869158s +STEP: Saw pod success +Oct 19 16:55:12.272: INFO: Pod "pod-secrets-666805c5-f2b0-4a41-a931-9d5db3d96b87" satisfied condition "Succeeded or Failed" +Oct 19 16:55:12.275: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-666805c5-f2b0-4a41-a931-9d5db3d96b87 container secret-volume-test: +STEP: delete the pod +Oct 19 16:55:12.288: INFO: Waiting for pod pod-secrets-666805c5-f2b0-4a41-a931-9d5db3d96b87 to disappear +Oct 19 16:55:12.291: INFO: Pod pod-secrets-666805c5-f2b0-4a41-a931-9d5db3d96b87 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:55:12.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6185" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":204,"skipped":3804,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:12.300: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename server-version +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in server-version-6329 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Request ServerVersion +STEP: Confirm major version +Oct 19 16:55:12.438: INFO: Major version: 1 +STEP: Confirm minor version +Oct 19 16:55:12.438: INFO: cleanMinorVersion: 22 +Oct 19 16:55:12.438: INFO: Minor version: 22 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:55:12.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-6329" for this suite. +•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":205,"skipped":3827,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:12.446: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3807 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-3807/configmap-test-13ac21df-3d13-4875-8708-b7e2f30065c5 +STEP: Creating a pod to test consume configMaps +Oct 19 16:55:12.594: INFO: Waiting up to 5m0s for pod "pod-configmaps-d50866f6-851a-4826-968b-8b383637e0af" in namespace "configmap-3807" to be "Succeeded or Failed" +Oct 19 16:55:12.599: INFO: Pod "pod-configmaps-d50866f6-851a-4826-968b-8b383637e0af": Phase="Pending", Reason="", readiness=false. Elapsed: 5.017843ms +Oct 19 16:55:14.604: INFO: Pod "pod-configmaps-d50866f6-851a-4826-968b-8b383637e0af": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009822249s +STEP: Saw pod success +Oct 19 16:55:14.604: INFO: Pod "pod-configmaps-d50866f6-851a-4826-968b-8b383637e0af" satisfied condition "Succeeded or Failed" +Oct 19 16:55:14.607: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-d50866f6-851a-4826-968b-8b383637e0af container env-test: +STEP: delete the pod +Oct 19 16:55:14.621: INFO: Waiting for pod pod-configmaps-d50866f6-851a-4826-968b-8b383637e0af to disappear +Oct 19 16:55:14.624: INFO: Pod pod-configmaps-d50866f6-851a-4826-968b-8b383637e0af no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:55:14.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3807" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":206,"skipped":3870,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:14.638: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-8407 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 16:55:14.799: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Oct 19 16:55:19.803: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 19 16:55:19.803: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 16:55:21.829: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-8407 bc92723f-c922-4b22-9bfa-133f6fdb422c 29392 1 2021-10-19 16:55:19 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-19 16:55:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 16:55:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002f81038 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-19 16:55:19 +0000 UTC,LastTransitionTime:2021-10-19 16:55:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2021-10-19 16:55:21 +0000 UTC,LastTransitionTime:2021-10-19 16:55:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 19 16:55:21.833: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-8407 beea8636-c48f-49aa-99e5-73d410a7bcfd 29385 1 2021-10-19 16:55:19 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-cleanup-deployment bc92723f-c922-4b22-9bfa-133f6fdb422c 0xc002f813f7 0xc002f813f8}] [] [{kube-controller-manager Update apps/v1 2021-10-19 16:55:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc92723f-c922-4b22-9bfa-133f6fdb422c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 16:55:21 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002f814a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 19 16:55:21.836: INFO: Pod "test-cleanup-deployment-5b4d99b59b-brltn" is available: +&Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-brltn test-cleanup-deployment-5b4d99b59b- deployment-8407 5dfae453-cbcb-451a-8117-f73a3d70ac85 29384 0 2021-10-19 16:55:19 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[cni.projectcalico.org/podIP:100.96.0.239/32 cni.projectcalico.org/podIPs:100.96.0.239/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b beea8636-c48f-49aa-99e5-73d410a7bcfd 0xc002f81867 0xc002f81868}] [] [{kube-controller-manager Update v1 2021-10-19 16:55:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"beea8636-c48f-49aa-99e5-73d410a7bcfd\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } 
{calico Update v1 2021-10-19 16:55:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 16:55:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.239\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6zjqc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6zjqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,
SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:55:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:55:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:55:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 16:55:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.239,StartTime:2021-10-19 16:55:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 16:55:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://5ea7eb8a92ffd75c82371e3aaa8ecae27a56fc074c9169c8896f1ac0b10255bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:55:21.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8407" for this suite. 
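+
+Worth noting in the object dump above: the Deployment carries `RevisionHistoryLimit:*0`, which is what makes the superseded ReplicaSet disappear. A hedged sketch of the same effect (resource names are hypothetical):
+
+```bash
+# With revisionHistoryLimit set to 0, each rollout garbage-collects the
+# ReplicaSets it supersedes; only the current one should be listed.
+kubectl create deployment test-cleanup --image=k8s.gcr.io/e2e-test-images/agnhost:2.32
+kubectl patch deployment test-cleanup -p '{"spec":{"revisionHistoryLimit":0}}'
+kubectl rollout restart deployment/test-cleanup
+kubectl get rs -l app=test-cleanup
+```
+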
+•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":207,"skipped":3894,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:21.846: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7379 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-7379 +Oct 19 16:55:21.992: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:55:23.997: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 19 16:55:24.000: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 19 16:55:24.302: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 19 16:55:24.302: INFO: stdout: "iptables" +Oct 19 16:55:24.302: INFO: proxyMode: iptables +Oct 19 16:55:24.311: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 19 16:55:24.315: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-7379 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-7379 +I1019 16:55:24.332238 4339 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7379, replica count: 3 +I1019 16:55:27.383400 4339 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 16:55:27.394: INFO: Creating new exec pod +Oct 19 16:55:30.413: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec execpod-affinityp8mh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Oct 19 16:55:30.663: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Oct 19 16:55:30.664: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:55:30.664: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec execpod-affinityp8mh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.223.117 80' +Oct 19 16:55:30.889: INFO: stderr: "+ nc -v -t -w 2 100.64.223.117 80\nConnection to 100.64.223.117 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Oct 19 16:55:30.889: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:55:30.889: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec execpod-affinityp8mh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.1.123 30883' +Oct 19 16:55:31.109: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.1.123 30883\nConnection to 10.250.1.123 30883 port [tcp/*] succeeded!\n" +Oct 19 16:55:31.109: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:55:31.109: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec execpod-affinityp8mh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.3.120 30883' +Oct 19 16:55:31.286: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.3.120 30883\nConnection to 10.250.3.120 30883 port [tcp/*] succeeded!\n" +Oct 19 16:55:31.286: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:55:31.286: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec execpod-affinityp8mh8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.1.123:30883/ ; done' +Oct 19 16:55:31.546: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n" +Oct 19 16:55:31.546: INFO: stdout: 
"\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc\naffinity-nodeport-timeout-gkkqc" +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Received response from host: affinity-nodeport-timeout-gkkqc +Oct 19 16:55:31.546: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec execpod-affinityp8mh8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.1.123:30883/' +Oct 19 16:55:31.792: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n" +Oct 19 16:55:31.792: INFO: stdout: "affinity-nodeport-timeout-gkkqc" +Oct 19 16:55:51.792: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7379 exec execpod-affinityp8mh8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.1.123:30883/' +Oct 19 16:55:52.095: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.1.123:30883/\n" +Oct 19 16:55:52.095: INFO: stdout: "affinity-nodeport-timeout-v6flx" +Oct 19 16:55:52.095: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7379, will wait for the garbage collector to delete the pods +Oct 19 16:55:52.158: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.915942ms +Oct 19 16:55:52.259: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.577207ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 
16:55:54.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7379" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":208,"skipped":3899,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:54.279: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7999 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Oct 19 16:55:54.423: INFO: Pod name sample-pod: Found 0 pods out of 3 +Oct 19 16:55:59.427: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Oct 19 16:55:59.430: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:55:59.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-7999" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":209,"skipped":3921,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:55:59.449: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-3487 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Oct 19 16:55:59.616: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:56:01.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-3487" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":210,"skipped":3940,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:56:01.683: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3412 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-6793e87f-50bb-42c0-ab8c-edede24a5e7a +STEP: Creating a pod to test consume secrets +Oct 19 16:56:01.875: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c68af79-91f6-4ef2-b1fa-d892364b01e5" in namespace "projected-3412" to be "Succeeded or Failed" +Oct 19 16:56:01.897: INFO: Pod "pod-projected-secrets-0c68af79-91f6-4ef2-b1fa-d892364b01e5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.525207ms +Oct 19 16:56:03.902: INFO: Pod "pod-projected-secrets-0c68af79-91f6-4ef2-b1fa-d892364b01e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026043615s +STEP: Saw pod success +Oct 19 16:56:03.902: INFO: Pod "pod-projected-secrets-0c68af79-91f6-4ef2-b1fa-d892364b01e5" satisfied condition "Succeeded or Failed" +Oct 19 16:56:03.905: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-secrets-0c68af79-91f6-4ef2-b1fa-d892364b01e5 container projected-secret-volume-test: +STEP: delete the pod +Oct 19 16:56:03.960: INFO: Waiting for pod pod-projected-secrets-0c68af79-91f6-4ef2-b1fa-d892364b01e5 to disappear +Oct 19 16:56:03.963: INFO: Pod pod-projected-secrets-0c68af79-91f6-4ef2-b1fa-d892364b01e5 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:56:03.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3412" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":211,"skipped":3973,"failed":0} +SSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:56:03.972: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-8039 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-8039 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-8039 +Oct 19 16:56:04.124: INFO: Found 0 stateful pods, waiting for 1 +Oct 19 16:56:14.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 19 16:56:14.154: INFO: Deleting all statefulset in ns statefulset-8039 +Oct 19 16:56:14.157: INFO: Scaling statefulset ss to 0 +Oct 19 16:56:24.200: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 19 16:56:24.203: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:56:24.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-8039" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":212,"skipped":3977,"failed":0} +SSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:56:24.221: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-4780 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 19 16:56:24.369: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:56:26.373: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 19 16:56:26.386: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:56:28.391: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 19 16:56:28.406: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 19 16:56:28.415: INFO: Pod pod-with-poststart-http-hook still exists +Oct 19 16:56:30.416: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 19 16:56:30.419: INFO: Pod pod-with-poststart-http-hook still exists +Oct 19 16:56:32.416: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 19 16:56:32.419: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:56:32.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4780" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":213,"skipped":3982,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:56:32.429: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-2166 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:56:32.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-2166" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":214,"skipped":4001,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:56:32.599: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9739 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Oct 19 16:56:32.750: INFO: observed Pod pod-test in namespace pods-9739 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Oct 19 16:56:32.753: INFO: observed Pod pod-test in namespace pods-9739 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC }] +Oct 19 16:56:32.780: INFO: observed Pod pod-test in namespace pods-9739 in phase Pending with 
labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC }] +Oct 19 16:56:33.177: INFO: observed Pod pod-test in namespace pods-9739 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC }] +Oct 19 16:56:34.338: INFO: Found Pod pod-test in namespace pods-9739 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 16:56:32 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Oct 19 16:56:34.346: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched +STEP: replacing the Pod's status Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Oct 19 16:56:34.366: INFO: observed event type ADDED +Oct 19 16:56:34.366: INFO: observed event type MODIFIED +Oct 19 16:56:34.366: INFO: observed event type MODIFIED +Oct 19 16:56:34.366: INFO: observed event type MODIFIED +Oct 19 16:56:34.366: INFO: observed event type MODIFIED +Oct 19 16:56:34.366: INFO: observed event type MODIFIED +Oct 19 16:56:34.366: INFO: observed event type MODIFIED +Oct 19 16:56:34.366: INFO: observed event type MODIFIED +Oct 19 16:56:36.343: INFO: observed event type MODIFIED +Oct 19 16:56:36.484: INFO: observed event type MODIFIED +Oct 19 16:56:37.347: INFO: observed event type MODIFIED +Oct 19 16:56:37.353: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:56:37.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9739" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":215,"skipped":4030,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:56:37.364: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6064 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +STEP: creating the pod +Oct 19 16:56:37.501: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 create -f -' +Oct 19 16:56:37.658: INFO: stderr: "" +Oct 19 16:56:37.658: INFO: stdout: "pod/pause created\n" +Oct 19 16:56:37.658: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Oct 19 16:56:37.658: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6064" to be "running and ready" +Oct 19 16:56:37.661: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.749898ms +Oct 19 16:56:39.665: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.006657418s +Oct 19 16:56:39.665: INFO: Pod "pause" satisfied condition "running and ready" +Oct 19 16:56:39.665: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: adding the label testing-label with value testing-label-value to a pod +Oct 19 16:56:39.665: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 label pods pause testing-label=testing-label-value' +Oct 19 16:56:39.721: INFO: stderr: "" +Oct 19 16:56:39.721: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Oct 19 16:56:39.721: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 get pod pause -L testing-label' +Oct 19 16:56:39.769: INFO: stderr: "" +Oct 19 16:56:39.769: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Oct 19 16:56:39.769: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 label pods pause testing-label-' +Oct 19 16:56:39.841: INFO: stderr: "" +Oct 19 16:56:39.841: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Oct 19 16:56:39.841: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 get pod pause -L testing-label' +Oct 19 16:56:39.890: INFO: stderr: "" +Oct 19 16:56:39.890: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +STEP: using delete to clean up resources +Oct 19 16:56:39.890: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 delete --grace-period=0 --force -f -' +Oct 19 16:56:39.949: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 19 16:56:39.949: INFO: stdout: "pod \"pause\" force deleted\n" +Oct 19 16:56:39.949: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 get rc,svc -l name=pause --no-headers' +Oct 19 16:56:39.999: INFO: stderr: "No resources found in kubectl-6064 namespace.\n" +Oct 19 16:56:39.999: INFO: stdout: "" +Oct 19 16:56:39.999: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-6064 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 19 16:56:40.043: INFO: stderr: "" +Oct 19 16:56:40.044: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:56:40.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6064" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":216,"skipped":4053,"failed":0} +SSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:56:40.053: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-5838 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 19 16:56:40.189: INFO: PodSpec: initContainers in spec.initContainers +Oct 19 16:57:19.445: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8ef1d248-afdd-4bbd-8521-caa06dcd482c", GenerateName:"", Namespace:"init-container-5838", SelfLink:"", UID:"5f897c4e-3e06-4839-b8e5-5aa8e1984859", ResourceVersion:"30345", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770259400, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"189841001"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"100.96.0.252/32", "cni.projectcalico.org/podIPs":"100.96.0.252/32", "kubernetes.io/psp":"e2e-test-privileged-psp"}, 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001df37d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001df37e8), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001df3800), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001df3818), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001df3848), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001df3860), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-vhmd4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0025108e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmhay-ddd.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-vhmd4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmhay-ddd.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-vhmd4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"api.tmhay-ddd.it.internal.staging.k8s.ondemand.com", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-vhmd4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0060d8d08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b9b730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0060d8d80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0060d8da0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0060d8da8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0060d8dac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc005daa910), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259400, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259400, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259400, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259400, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.250.1.123", PodIP:"100.96.0.252", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.0.252"}}, StartTime:(*v1.Time)(0xc001df3890), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b9b810)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b9b880)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://39d07dfc175d0b4cc397c9f87bf36cdf7c1aa3f9e49450ffbadd3838c3de61d1", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002510960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002510940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc0060d8e2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:57:19.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-5838" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":217,"skipped":4056,"failed":0} + +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:57:19.454: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-395 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Oct 19 16:57:19.598: INFO: Waiting up to 5m0s for pod "pod-7dbaf83a-3647-4f71-a4c8-34c1f3d9aef7" in namespace "emptydir-395" to be "Succeeded or Failed" +Oct 19 16:57:19.600: INFO: Pod "pod-7dbaf83a-3647-4f71-a4c8-34c1f3d9aef7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.806276ms +Oct 19 16:57:21.605: INFO: Pod "pod-7dbaf83a-3647-4f71-a4c8-34c1f3d9aef7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007415163s +STEP: Saw pod success +Oct 19 16:57:21.605: INFO: Pod "pod-7dbaf83a-3647-4f71-a4c8-34c1f3d9aef7" satisfied condition "Succeeded or Failed" +Oct 19 16:57:21.608: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-7dbaf83a-3647-4f71-a4c8-34c1f3d9aef7 container test-container: +STEP: delete the pod +Oct 19 16:57:21.638: INFO: Waiting for pod pod-7dbaf83a-3647-4f71-a4c8-34c1f3d9aef7 to disappear +Oct 19 16:57:21.641: INFO: Pod pod-7dbaf83a-3647-4f71-a4c8-34c1f3d9aef7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:57:21.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-395" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":218,"skipped":4056,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:57:21.650: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-315 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-e3d9ce3f-c54d-4fbc-aea6-ea24ff68851b +STEP: Creating secret with name s-test-opt-upd-7fb9e720-b272-4eb6-8885-f76da511b9f0 +STEP: Creating the pod +Oct 19 16:57:21.842: INFO: The status of Pod pod-projected-secrets-444e8c45-fb68-401a-a72d-8c474dfd5569 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:57:23.910: INFO: The status of Pod pod-projected-secrets-444e8c45-fb68-401a-a72d-8c474dfd5569 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:57:25.847: INFO: The status of Pod pod-projected-secrets-444e8c45-fb68-401a-a72d-8c474dfd5569 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-e3d9ce3f-c54d-4fbc-aea6-ea24ff68851b +STEP: Updating secret s-test-opt-upd-7fb9e720-b272-4eb6-8885-f76da511b9f0 +STEP: Creating secret with name s-test-opt-create-ed988ca5-dc2d-465c-b18f-a48ddf06df61 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:58:40.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-315" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":219,"skipped":4062,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:58:40.551: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-2818 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-projected-2hqb +STEP: Creating a pod to test atomic-volume-subpath +Oct 19 16:58:40.702: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2hqb" in namespace "subpath-2818" to be "Succeeded or Failed" +Oct 19 16:58:40.706: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694649ms +Oct 19 16:58:42.710: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 2.007857413s +Oct 19 16:58:44.716: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 4.013181937s +Oct 19 16:58:46.720: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 6.017689319s +Oct 19 16:58:48.725: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 8.022415483s +Oct 19 16:58:50.730: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 10.027555889s +Oct 19 16:58:52.734: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 12.031747406s +Oct 19 16:58:54.739: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 14.036518917s +Oct 19 16:58:56.744: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 16.041642811s +Oct 19 16:58:58.749: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 18.046674003s +Oct 19 16:59:00.754: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Running", Reason="", readiness=true. Elapsed: 20.051561644s +Oct 19 16:59:02.759: INFO: Pod "pod-subpath-test-projected-2hqb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.056284048s +STEP: Saw pod success +Oct 19 16:59:02.759: INFO: Pod "pod-subpath-test-projected-2hqb" satisfied condition "Succeeded or Failed" +Oct 19 16:59:02.762: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq pod pod-subpath-test-projected-2hqb container test-container-subpath-projected-2hqb: +STEP: delete the pod +Oct 19 16:59:02.781: INFO: Waiting for pod pod-subpath-test-projected-2hqb to disappear +Oct 19 16:59:02.784: INFO: Pod pod-subpath-test-projected-2hqb no longer exists +STEP: Deleting pod pod-subpath-test-projected-2hqb +Oct 19 16:59:02.784: INFO: Deleting pod "pod-subpath-test-projected-2hqb" in namespace "subpath-2818" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:02.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-2818" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":220,"skipped":4076,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:02.796: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-19 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-19 +STEP: creating service affinity-nodeport in namespace services-19 +STEP: creating replication controller affinity-nodeport in namespace services-19 +I1019 16:59:02.950178 4339 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-19, replica count: 3 +I1019 16:59:06.001655 4339 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 16:59:06.012: INFO: Creating new exec pod +Oct 19 16:59:09.079: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-19 exec execpod-affinity86gk6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Oct 19 16:59:09.308: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Oct 19 16:59:09.308: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:59:09.308: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-19 exec execpod-affinity86gk6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.109.84 80' +Oct 19 16:59:09.569: INFO: stderr: "+ nc -v -t -w 2 100.67.109.84 80\n+ echo hostName\nConnection to 100.67.109.84 80 port [tcp/http] succeeded!\n" +Oct 19 16:59:09.569: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:59:09.569: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-19 exec execpod-affinity86gk6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.1.123 31229' +Oct 19 16:59:09.792: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.1.123 31229\nConnection to 10.250.1.123 31229 port [tcp/*] succeeded!\n" +Oct 19 16:59:09.792: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:59:09.792: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-19 exec execpod-affinity86gk6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.3.120 31229' +Oct 19 16:59:10.012: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.3.120 31229\nConnection to 10.250.3.120 31229 port [tcp/*] succeeded!\n" +Oct 19 16:59:10.012: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:59:10.012: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-19 exec execpod-affinity86gk6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.1.123:31229/ ; done' +Oct 19 16:59:10.317: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31229/\n" +Oct 19 16:59:10.317: INFO: stdout: 
"\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk\naffinity-nodeport-dnztk" +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Received response from host: affinity-nodeport-dnztk +Oct 19 16:59:10.317: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-19, will wait for the garbage collector to delete the pods +Oct 19 16:59:10.386: INFO: Deleting ReplicationController affinity-nodeport took: 3.74126ms +Oct 19 16:59:10.486: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.515771ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:12.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-19" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":221,"skipped":4090,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:12.811: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7563 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 16:59:13.212: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 16:59:16.245: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Oct 19 16:59:16.303: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:16.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7563" for this suite. +STEP: Destroying namespace "webhook-7563-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":222,"skipped":4107,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:16.434: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3212 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-0f5ed138-3445-499d-a16f-686280de5396 +STEP: Creating a pod to test consume secrets +Oct 19 16:59:16.580: INFO: Waiting up to 5m0s for pod "pod-secrets-e277a9f3-cf70-4461-a039-6c3f73d11713" in namespace "secrets-3212" to be "Succeeded or Failed" +Oct 19 16:59:16.583: INFO: Pod "pod-secrets-e277a9f3-cf70-4461-a039-6c3f73d11713": Phase="Pending", Reason="", readiness=false. Elapsed: 3.126423ms +Oct 19 16:59:18.587: INFO: Pod "pod-secrets-e277a9f3-cf70-4461-a039-6c3f73d11713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006821818s +STEP: Saw pod success +Oct 19 16:59:18.587: INFO: Pod "pod-secrets-e277a9f3-cf70-4461-a039-6c3f73d11713" satisfied condition "Succeeded or Failed" +Oct 19 16:59:18.589: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-e277a9f3-cf70-4461-a039-6c3f73d11713 container secret-volume-test: +STEP: delete the pod +Oct 19 16:59:18.605: INFO: Waiting for pod pod-secrets-e277a9f3-cf70-4461-a039-6c3f73d11713 to disappear +Oct 19 16:59:18.608: INFO: Pod pod-secrets-e277a9f3-cf70-4461-a039-6c3f73d11713 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:18.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3212" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":223,"skipped":4117,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:18.617: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8501 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:18.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8501" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":224,"skipped":4178,"failed":0} +SSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:18.789: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1540 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 19 16:59:18.930: INFO: Waiting up to 5m0s for pod "downward-api-4fa8ed73-a554-4a49-b1a4-b34a7b8aa475" in namespace "downward-api-1540" to be "Succeeded or Failed" +Oct 19 16:59:18.935: INFO: Pod "downward-api-4fa8ed73-a554-4a49-b1a4-b34a7b8aa475": Phase="Pending", Reason="", readiness=false. Elapsed: 4.884002ms +Oct 19 16:59:20.939: INFO: Pod "downward-api-4fa8ed73-a554-4a49-b1a4-b34a7b8aa475": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008997265s +STEP: Saw pod success +Oct 19 16:59:20.939: INFO: Pod "downward-api-4fa8ed73-a554-4a49-b1a4-b34a7b8aa475" satisfied condition "Succeeded or Failed" +Oct 19 16:59:20.969: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downward-api-4fa8ed73-a554-4a49-b1a4-b34a7b8aa475 container dapi-container: +STEP: delete the pod +Oct 19 16:59:20.983: INFO: Waiting for pod downward-api-4fa8ed73-a554-4a49-b1a4-b34a7b8aa475 to disappear +Oct 19 16:59:20.986: INFO: Pod downward-api-4fa8ed73-a554-4a49-b1a4-b34a7b8aa475 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:20.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1540" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":225,"skipped":4182,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:20.995: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5845 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting the proxy server +Oct 19 16:59:21.130: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5845 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:21.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5845" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":226,"skipped":4231,"failed":0} +SSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:21.178: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3344 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-f039f143-7b08-4137-aca8-7db9b35594b7 +STEP: Creating a pod to test consume configMaps +Oct 19 16:59:21.323: INFO: Waiting up to 5m0s for pod "pod-configmaps-6703a088-352e-4f7c-bd94-e76a57128e30" in namespace "configmap-3344" to be "Succeeded or Failed" +Oct 19 16:59:21.327: INFO: Pod "pod-configmaps-6703a088-352e-4f7c-bd94-e76a57128e30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237909ms +Oct 19 16:59:23.331: INFO: Pod "pod-configmaps-6703a088-352e-4f7c-bd94-e76a57128e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008310979s +STEP: Saw pod success +Oct 19 16:59:23.331: INFO: Pod "pod-configmaps-6703a088-352e-4f7c-bd94-e76a57128e30" satisfied condition "Succeeded or Failed" +Oct 19 16:59:23.334: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-6703a088-352e-4f7c-bd94-e76a57128e30 container agnhost-container: +STEP: delete the pod +Oct 19 16:59:23.388: INFO: Waiting for pod pod-configmaps-6703a088-352e-4f7c-bd94-e76a57128e30 to disappear +Oct 19 16:59:23.391: INFO: Pod pod-configmaps-6703a088-352e-4f7c-bd94-e76a57128e30 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:23.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3344" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":227,"skipped":4234,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:23.399: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2964 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-937a44a3-9520-4392-a811-2d6e23d03522 +STEP: Creating a pod to test consume secrets +Oct 19 16:59:23.547: INFO: Waiting up to 5m0s for pod "pod-secrets-753346ff-59d1-4a74-80e0-11b348153abf" in namespace "secrets-2964" to be "Succeeded or Failed" +Oct 19 16:59:23.550: INFO: Pod "pod-secrets-753346ff-59d1-4a74-80e0-11b348153abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82142ms +Oct 19 16:59:25.554: INFO: Pod "pod-secrets-753346ff-59d1-4a74-80e0-11b348153abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006420664s +STEP: Saw pod success +Oct 19 16:59:25.554: INFO: Pod "pod-secrets-753346ff-59d1-4a74-80e0-11b348153abf" satisfied condition "Succeeded or Failed" +Oct 19 16:59:25.557: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-753346ff-59d1-4a74-80e0-11b348153abf container secret-volume-test: +STEP: delete the pod +Oct 19 16:59:25.571: INFO: Waiting for pod pod-secrets-753346ff-59d1-4a74-80e0-11b348153abf to disappear +Oct 19 16:59:25.574: INFO: Pod pod-secrets-753346ff-59d1-4a74-80e0-11b348153abf no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:25.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2964" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":228,"skipped":4241,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:25.583: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9585 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-9585 +Oct 19 16:59:25.726: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Oct 19 16:59:27.731: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Oct 19 16:59:27.734: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9585 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Oct 19 16:59:27.951: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Oct 19 16:59:27.951: INFO: stdout: "iptables" +Oct 19 16:59:27.951: INFO: proxyMode: iptables +Oct 19 16:59:27.958: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Oct 19 16:59:27.961: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-9585 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-9585 +I1019 16:59:27.973272 4339 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9585, replica count: 3 +I1019 16:59:31.024594 4339 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 16:59:31.031: INFO: Creating new exec pod +Oct 19 16:59:34.046: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9585 exec execpod-affinityczzcg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Oct 19 16:59:34.218: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Oct 19 16:59:34.218: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:59:34.218: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9585 exec execpod-affinityczzcg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.155.17 80' +Oct 19 16:59:34.431: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.155.17 80\nConnection to 100.71.155.17 80 port [tcp/http] succeeded!\n" +Oct 19 16:59:34.431: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 16:59:34.431: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9585 exec execpod-affinityczzcg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.71.155.17:80/ ; done' +Oct 19 16:59:34.724: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n" +Oct 19 16:59:34.724: INFO: stdout: "\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f\naffinity-clusterip-timeout-bll5f" +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: 
affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Received response from host: affinity-clusterip-timeout-bll5f +Oct 19 16:59:34.724: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9585 exec execpod-affinityczzcg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.71.155.17:80/' +Oct 19 16:59:34.915: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n" +Oct 19 16:59:34.915: INFO: stdout: "affinity-clusterip-timeout-bll5f" +Oct 19 16:59:54.915: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-9585 exec execpod-affinityczzcg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.71.155.17:80/' +Oct 19 16:59:55.185: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.71.155.17:80/\n" +Oct 19 16:59:55.185: INFO: stdout: "affinity-clusterip-timeout-84hx5" +Oct 19 16:59:55.185: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9585, will wait for the garbage collector to delete the pods +Oct 19 16:59:55.254: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 4.266793ms +Oct 19 16:59:55.355: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.181344ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:56.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9585" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":229,"skipped":4249,"failed":0} +SSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:57.001: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7746 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-8a5c317f-16b7-4871-bbc9-2373e74ff2ee +STEP: Creating a pod to test consume secrets +Oct 19 16:59:57.147: INFO: Waiting up to 5m0s for pod "pod-secrets-d23dad26-bfdf-46a9-96b6-e7c44f8a6983" in namespace "secrets-7746" to be "Succeeded or Failed" +Oct 19 16:59:57.151: INFO: Pod "pod-secrets-d23dad26-bfdf-46a9-96b6-e7c44f8a6983": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541706ms +Oct 19 16:59:59.155: INFO: Pod "pod-secrets-d23dad26-bfdf-46a9-96b6-e7c44f8a6983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007693854s +STEP: Saw pod success +Oct 19 16:59:59.155: INFO: Pod "pod-secrets-d23dad26-bfdf-46a9-96b6-e7c44f8a6983" satisfied condition "Succeeded or Failed" +Oct 19 16:59:59.158: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-secrets-d23dad26-bfdf-46a9-96b6-e7c44f8a6983 container secret-volume-test: +STEP: delete the pod +Oct 19 16:59:59.173: INFO: Waiting for pod pod-secrets-d23dad26-bfdf-46a9-96b6-e7c44f8a6983 to disappear +Oct 19 16:59:59.176: INFO: Pod pod-secrets-d23dad26-bfdf-46a9-96b6-e7c44f8a6983 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 16:59:59.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7746" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":230,"skipped":4254,"failed":0} + +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 16:59:59.185: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-166 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-166 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Oct 19 16:59:59.332: INFO: Found 0 stateful pods, waiting for 3 +Oct 19 17:00:09.336: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 17:00:09.336: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 17:00:09.336: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 17:00:09.346: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-166 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 19 17:00:09.564: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 19 17:00:09.564: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 19 17:00:09.564: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Oct 19 17:00:19.601: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Oct 19 17:00:29.621: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-166 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 19 17:00:29.853: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 19 17:00:29.853: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 19 17:00:29.853: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +STEP: Rolling back to a previous revision +Oct 19 17:00:39.877: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-166 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 19 17:00:40.143: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 19 17:00:40.143: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 19 17:00:40.143: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 19 17:00:50.176: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Oct 19 17:01:00.197: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=statefulset-166 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 19 17:01:00.428: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 19 17:01:00.428: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 19 17:01:00.428: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 19 17:01:10.448: INFO: Deleting all statefulset in ns statefulset-166 +Oct 19 17:01:10.451: INFO: Scaling statefulset ss2 to 0 +Oct 19 17:01:20.470: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 19 17:01:20.473: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:20.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-166" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":231,"skipped":4254,"failed":0} +SSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:20.491: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5942 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-b21d9735-0d0a-4cc4-9e3a-7913f0970bd9 +STEP: Creating configMap with name cm-test-opt-upd-5ab115b9-574f-430a-88b5-8ed9f2d45145 +STEP: Creating the pod +Oct 19 17:01:20.652: INFO: The status of Pod pod-configmaps-8f1379ad-a74e-4ccc-99b1-57506f6bc87c is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:01:22.657: INFO: The status of Pod pod-configmaps-8f1379ad-a74e-4ccc-99b1-57506f6bc87c is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-b21d9735-0d0a-4cc4-9e3a-7913f0970bd9 +STEP: Updating configmap cm-test-opt-upd-5ab115b9-574f-430a-88b5-8ed9f2d45145 +STEP: Creating configMap with name cm-test-opt-create-e188eeed-876f-4dfd-bc32-8954c88bcde2 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:24.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5942" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":232,"skipped":4258,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:24.844: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-4647 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:01:24.999: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9647fa52-f6e4-41d6-956b-8695235472d6" in namespace "security-context-test-4647" to be "Succeeded or Failed" +Oct 19 17:01:25.008: INFO: Pod "busybox-readonly-false-9647fa52-f6e4-41d6-956b-8695235472d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.650334ms +Oct 19 17:01:27.012: INFO: Pod "busybox-readonly-false-9647fa52-f6e4-41d6-956b-8695235472d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012849707s +Oct 19 17:01:27.012: INFO: Pod "busybox-readonly-false-9647fa52-f6e4-41d6-956b-8695235472d6" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:27.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-4647" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":233,"skipped":4293,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:27.021: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6548 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 17:01:27.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f513f0f8-5524-43c6-94d7-83053f3b8e87" in namespace "downward-api-6548" to be "Succeeded or Failed" +Oct 19 17:01:27.165: INFO: Pod "downwardapi-volume-f513f0f8-5524-43c6-94d7-83053f3b8e87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.427301ms +Oct 19 17:01:29.169: INFO: Pod "downwardapi-volume-f513f0f8-5524-43c6-94d7-83053f3b8e87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006989826s +STEP: Saw pod success +Oct 19 17:01:29.169: INFO: Pod "downwardapi-volume-f513f0f8-5524-43c6-94d7-83053f3b8e87" satisfied condition "Succeeded or Failed" +Oct 19 17:01:29.172: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq pod downwardapi-volume-f513f0f8-5524-43c6-94d7-83053f3b8e87 container client-container: +STEP: delete the pod +Oct 19 17:01:29.232: INFO: Waiting for pod downwardapi-volume-f513f0f8-5524-43c6-94d7-83053f3b8e87 to disappear +Oct 19 17:01:29.236: INFO: Pod downwardapi-volume-f513f0f8-5524-43c6-94d7-83053f3b8e87 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:29.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6548" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":234,"skipped":4325,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:29.246: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1100 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:01:29.380: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 19 17:01:32.244: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1100 --namespace=crd-publish-openapi-1100 create -f -' +Oct 19 17:01:32.553: INFO: stderr: "" +Oct 19 17:01:32.553: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 19 17:01:32.553: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1100 --namespace=crd-publish-openapi-1100 delete e2e-test-crd-publish-openapi-5488-crds test-cr' +Oct 19 17:01:32.606: INFO: stderr: "" +Oct 19 17:01:32.606: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Oct 19 17:01:32.606: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1100 --namespace=crd-publish-openapi-1100 apply -f -' +Oct 19 17:01:32.736: INFO: stderr: "" +Oct 19 17:01:32.736: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 19 17:01:32.736: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1100 --namespace=crd-publish-openapi-1100 delete e2e-test-crd-publish-openapi-5488-crds test-cr' +Oct 19 17:01:32.788: INFO: stderr: "" +Oct 19 17:01:32.788: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 
+Oct 19 17:01:32.788: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1100 explain e2e-test-crd-publish-openapi-5488-crds' +Oct 19 17:01:32.907: INFO: stderr: "" +Oct 19 17:01:32.907: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5488-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:35.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1100" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":235,"skipped":4372,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:35.772: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-498 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:35.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-498" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":236,"skipped":4391,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:35.942: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-2798 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Oct 19 17:01:36.105: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:01:38.109: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Oct 19 17:01:38.125: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:01:40.128: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Oct 19 17:01:40.131: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:40.131: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:40.291: INFO: Exec stderr: "" +Oct 19 17:01:40.291: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:40.291: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:40.499: INFO: Exec stderr: "" +Oct 19 17:01:40.499: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:40.499: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:40.706: INFO: Exec stderr: "" +Oct 19 17:01:40.706: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:40.706: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:40.867: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Oct 19 17:01:40.867: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} +Oct 19 17:01:40.867: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:41.123: INFO: Exec stderr: "" +Oct 19 17:01:41.123: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:41.123: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:41.302: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Oct 19 17:01:41.302: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:41.302: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:41.512: INFO: Exec stderr: "" +Oct 19 17:01:41.512: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:41.512: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:41.720: INFO: Exec stderr: "" +Oct 19 17:01:41.720: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:41.720: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:41.927: INFO: Exec stderr: "" +Oct 19 17:01:41.927: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2798 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:01:41.927: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:01:42.129: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-2798" for this suite. 
+•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":237,"skipped":4414,"failed":0} +S +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:42.137: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6394 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 17:01:42.591: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 17:01:45.610: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:45.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6394" for this suite. +STEP: Destroying namespace "webhook-6394-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":238,"skipped":4415,"failed":0} +SSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:45.656: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sysctl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-3650 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:01:47.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-3650" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":239,"skipped":4418,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:01:47.881: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2227 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-191bc7f0-2ec4-48e5-bc15-4b76c5fe8a5b in namespace container-probe-2227 +Oct 19 17:01:50.044: INFO: Started pod liveness-191bc7f0-2ec4-48e5-bc15-4b76c5fe8a5b in namespace container-probe-2227 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 19 17:01:50.047: INFO: Initial restart count of pod liveness-191bc7f0-2ec4-48e5-bc15-4b76c5fe8a5b is 0 +Oct 19 17:02:10.100: INFO: Restart count of pod container-probe-2227/liveness-191bc7f0-2ec4-48e5-bc15-4b76c5fe8a5b is now 1 (20.052177148s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:02:10.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2227" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":240,"skipped":4443,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:02:10.185: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7799 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-7799 +STEP: creating service affinity-clusterip-transition in namespace services-7799 +STEP: creating replication controller affinity-clusterip-transition in namespace services-7799 +I1019 17:02:10.467370 4339 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7799, replica count: 3 +I1019 17:02:13.518389 4339 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 17:02:13.524: INFO: Creating new exec pod +Oct 19 17:02:16.538: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7799 exec execpod-affinityqgjrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Oct 19 17:02:16.825: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Oct 19 17:02:16.825: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 17:02:16.825: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7799 exec execpod-affinityqgjrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.33.93 80' +Oct 19 17:02:17.032: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.33.93 80\nConnection to 100.64.33.93 80 port [tcp/http] succeeded!\n" +Oct 19 17:02:17.032: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 17:02:17.040: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7799 exec execpod-affinityqgjrj -- 
/bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.33.93:80/ ; done' +Oct 19 17:02:17.305: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n" +Oct 19 17:02:17.305: INFO: stdout: "\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj" +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:17.305: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.305: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config 
--namespace=services-7799 exec execpod-affinityqgjrj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.33.93:80/ ; done' +Oct 19 17:02:47.638: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n" +Oct 19 17:02:47.639: INFO: stdout: "\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-2djbw\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-2djbw\naffinity-clusterip-transition-2djbw\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-2djbw\naffinity-clusterip-transition-2djbw\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-2djbw\naffinity-clusterip-transition-nrcdj\naffinity-clusterip-transition-2djbw\naffinity-clusterip-transition-ndbfk" +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-2djbw +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-2djbw +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-2djbw +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-2djbw +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-2djbw +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-2djbw +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-nrcdj +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-2djbw +Oct 19 17:02:47.639: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:47.649: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-7799 exec execpod-affinityqgjrj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.33.93:80/ ; done' +Oct 19 17:02:48.002: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.33.93:80/\n" +Oct 19 17:02:48.002: INFO: stdout: "\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk\naffinity-clusterip-transition-ndbfk" +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Received response from host: affinity-clusterip-transition-ndbfk +Oct 19 17:02:48.002: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController 
affinity-clusterip-transition in namespace services-7799, will wait for the garbage collector to delete the pods +Oct 19 17:02:48.066: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.046817ms +Oct 19 17:02:48.167: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.858816ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:02:50.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7799" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":241,"skipped":4468,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:02:50.285: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-4861 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:02:54.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-4861" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":242,"skipped":4509,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:02:54.443: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-5079 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-secret-tfg2 +STEP: Creating a pod to test atomic-volume-subpath +Oct 19 17:02:54.592: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tfg2" in namespace "subpath-5079" to be "Succeeded or Failed" +Oct 19 17:02:54.596: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.338844ms +Oct 19 17:02:56.600: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007707817s +Oct 19 17:02:58.604: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012090015s +Oct 19 17:03:00.608: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 6.016079196s +Oct 19 17:03:02.621: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 8.029161408s +Oct 19 17:03:04.625: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 10.033063146s +Oct 19 17:03:06.629: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 12.036924854s +Oct 19 17:03:08.634: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 14.041504717s +Oct 19 17:03:10.638: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 16.045968549s +Oct 19 17:03:12.642: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 18.050160679s +Oct 19 17:03:14.647: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Running", Reason="", readiness=true. Elapsed: 20.055239163s +Oct 19 17:03:16.652: INFO: Pod "pod-subpath-test-secret-tfg2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.06015939s +STEP: Saw pod success +Oct 19 17:03:16.652: INFO: Pod "pod-subpath-test-secret-tfg2" satisfied condition "Succeeded or Failed" +Oct 19 17:03:16.656: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-subpath-test-secret-tfg2 container test-container-subpath-secret-tfg2: +STEP: delete the pod +Oct 19 17:03:16.671: INFO: Waiting for pod pod-subpath-test-secret-tfg2 to disappear +Oct 19 17:03:16.673: INFO: Pod pod-subpath-test-secret-tfg2 no longer exists +STEP: Deleting pod pod-subpath-test-secret-tfg2 +Oct 19 17:03:16.673: INFO: Deleting pod "pod-subpath-test-secret-tfg2" in namespace "subpath-5079" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:03:16.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-5079" for this suite. +•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":243,"skipped":4515,"failed":0} + +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:03:16.685: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8550 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-8550 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-8550 +STEP: creating replication controller externalsvc in namespace services-8550 +I1019 17:03:16.908490 4339 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8550, replica count: 2 +I1019 17:03:19.960864 4339 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Oct 19 17:03:19.976: INFO: Creating new exec pod +Oct 19 17:03:21.990: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-8550 exec execpod28dgn -- /bin/sh -x -c nslookup nodeport-service.services-8550.svc.cluster.local' +Oct 19 17:03:22.269: INFO: stderr: "+ nslookup nodeport-service.services-8550.svc.cluster.local\n" +Oct 19 17:03:22.269: INFO: stdout: 
"Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nnodeport-service.services-8550.svc.cluster.local\tcanonical name = externalsvc.services-8550.svc.cluster.local.\nName:\texternalsvc.services-8550.svc.cluster.local\nAddress: 100.70.132.29\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-8550, will wait for the garbage collector to delete the pods +Oct 19 17:03:22.326: INFO: Deleting ReplicationController externalsvc took: 3.804356ms +Oct 19 17:03:22.427: INFO: Terminating ReplicationController externalsvc pods took: 100.296342ms +Oct 19 17:03:24.335: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:03:24.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8550" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":244,"skipped":4515,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:03:24.350: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-33 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:03:41.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-33" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":346,"completed":245,"skipped":4529,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:03:41.550: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-7526 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Oct 19 17:03:41.701: INFO: pods: 0 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Oct 19 17:03:43.773: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:03:45.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-7526" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":246,"skipped":4542,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:03:45.811: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9285 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 17:03:46.298: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 17:03:49.317: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:03:49.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9285" for this suite. +STEP: Destroying namespace "webhook-9285-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":247,"skipped":4579,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:03:49.559: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8724 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 17:03:50.141: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 17:03:53.160: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:03:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8724" for this suite. +STEP: Destroying namespace "webhook-8724-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":248,"skipped":4588,"failed":0} +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:03:53.368: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5392 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on tmpfs +Oct 19 17:03:53.515: INFO: Waiting up to 5m0s for pod "pod-a3766eb1-aede-4975-8be8-4aea311de008" in namespace "emptydir-5392" to be "Succeeded or Failed" +Oct 19 17:03:53.519: INFO: Pod "pod-a3766eb1-aede-4975-8be8-4aea311de008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.499667ms +Oct 19 17:03:55.523: INFO: Pod "pod-a3766eb1-aede-4975-8be8-4aea311de008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00764008s +STEP: Saw pod success +Oct 19 17:03:55.523: INFO: Pod "pod-a3766eb1-aede-4975-8be8-4aea311de008" satisfied condition "Succeeded or Failed" +Oct 19 17:03:55.526: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-a3766eb1-aede-4975-8be8-4aea311de008 container test-container: +STEP: delete the pod +Oct 19 17:03:55.540: INFO: Waiting for pod pod-a3766eb1-aede-4975-8be8-4aea311de008 to disappear +Oct 19 17:03:55.543: INFO: Pod pod-a3766eb1-aede-4975-8be8-4aea311de008 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:03:55.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5392" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":249,"skipped":4592,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:03:55.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9961 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 17:03:57.843: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.850: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.857: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.903: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.919: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.925: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.930: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.936: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:03:57.946: INFO: Lookups using dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] + +Oct 19 17:04:02.954: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the 
requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:02.961: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:02.975: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:03.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:03.035: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:03.040: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:03.045: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:03.050: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:03.061: INFO: Lookups using dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] + +Oct 19 17:04:07.954: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:07.960: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:08.005: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:08.013: INFO: Unable to read 
wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:08.031: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:08.037: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:08.042: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:08.049: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:08.061: INFO: Lookups using dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] + +Oct 19 17:04:12.957: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:12.962: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:12.968: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:13.011: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:13.039: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:13.044: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server 
could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:13.049: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:13.054: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:13.064: INFO: Lookups using dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] + +Oct 19 17:04:17.953: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:17.958: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:18.003: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:18.009: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:18.025: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:18.030: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:18.035: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:18.041: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:18.052: INFO: Lookups using 
dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] + +Oct 19 17:04:22.953: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:22.999: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:23.005: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:23.010: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:23.064: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:23.077: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:23.084: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:23.089: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07: the server could not find the requested resource (get pods dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07) +Oct 19 17:04:23.100: INFO: Lookups using dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] + +Oct 19 17:04:28.062: INFO: DNS probes using dns-9961/dns-test-284f68e9-5176-48c3-9521-871a8c1f0d07 succeeded + +STEP: 
deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:04:28.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9961" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":250,"skipped":4600,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:04:28.101: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename aggregator +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-3460 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Oct 19 17:04:28.245: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the sample API server. 
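
(Editor's aside: behind "Registering the sample API server" is the creation of an `APIService` object that tells the aggregation layer to route a group/version to a Service fronting the extension apiserver. A minimal sketch of such a registration; only the APIService name `v1alpha1.wardle.example.com` and the later `versionPriority` patch are taken from this log, the Service name and priorities are illustrative:)

```bash
# Illustrative registration of the wardle sample apiserver with the aggregator.
kubectl apply -f - <<EOF
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api          # hypothetical Service in front of the apiserver pod
    namespace: default
  insecureSkipTLSVerify: true # sketch only; real registrations pin a caBundle
EOF

# The test later raises the priority the same way the log shows:
kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
```
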
+Oct 19 17:04:28.552: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Oct 19 17:04:30.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 17:04:32.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 17:04:34.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770259868, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 19 17:04:37.723: INFO: Waited 1.117903342s for the sample-apiserver to be ready to handle requests. 
+STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Oct 19 17:04:37.971: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:04:38.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-3460" for this suite. +•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":251,"skipped":4610,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:04:38.428: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-2757 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override command +Oct 19 17:04:38.587: INFO: Waiting up to 5m0s for pod "client-containers-71b8294a-9a72-40d8-80d8-3d4c0ebb8556" in namespace "containers-2757" to be "Succeeded or Failed" +Oct 19 17:04:38.594: INFO: Pod "client-containers-71b8294a-9a72-40d8-80d8-3d4c0ebb8556": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279103ms +Oct 19 17:04:40.601: INFO: Pod "client-containers-71b8294a-9a72-40d8-80d8-3d4c0ebb8556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013296572s +STEP: Saw pod success +Oct 19 17:04:40.601: INFO: Pod "client-containers-71b8294a-9a72-40d8-80d8-3d4c0ebb8556" satisfied condition "Succeeded or Failed" +Oct 19 17:04:40.609: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod client-containers-71b8294a-9a72-40d8-80d8-3d4c0ebb8556 container agnhost-container: +STEP: delete the pod +Oct 19 17:04:40.626: INFO: Waiting for pod client-containers-71b8294a-9a72-40d8-80d8-3d4c0ebb8556 to disappear +Oct 19 17:04:40.629: INFO: Pod client-containers-71b8294a-9a72-40d8-80d8-3d4c0ebb8556 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:04:40.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2757" for this suite. 
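
(Editor's aside: the "override the image's default command" case above relies on a pod's `command` field replacing the image ENTRYPOINT, while `args` would replace CMD. A minimal sketch of that pattern, assuming the agnhost image seen elsewhere in this log and its `entrypoint-tester` subcommand; the pod name is illustrative:)

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    # command: replaces the image ENTRYPOINT entirely
    command: ["/agnhost", "entrypoint-tester", "override", "arguments"]
EOF

kubectl logs entrypoint-override     # prints the overridden invocation once the pod succeeds
```
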
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":252,"skipped":4646,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:04:40.638: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2111 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:04:42.789: INFO: Deleting pod "var-expansion-f4d4f511-2028-44cf-b306-4427df63b09d" in namespace "var-expansion-2111" +Oct 19 17:04:42.794: INFO: Wait up to 5m0s for pod "var-expansion-f4d4f511-2028-44cf-b306-4427df63b09d" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:04:44.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-2111" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":253,"skipped":4658,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:04:44.811: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-5476 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test env composition +Oct 19 17:04:44.955: INFO: Waiting up to 5m0s for pod "var-expansion-e9cedac1-966f-470f-96d5-7434589cb198" in namespace "var-expansion-5476" to be "Succeeded or Failed" +Oct 19 17:04:44.958: INFO: Pod "var-expansion-e9cedac1-966f-470f-96d5-7434589cb198": Phase="Pending", Reason="", readiness=false. Elapsed: 3.675492ms +Oct 19 17:04:46.963: INFO: Pod "var-expansion-e9cedac1-966f-470f-96d5-7434589cb198": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007814365s +STEP: Saw pod success +Oct 19 17:04:46.963: INFO: Pod "var-expansion-e9cedac1-966f-470f-96d5-7434589cb198" satisfied condition "Succeeded or Failed" +Oct 19 17:04:46.966: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod var-expansion-e9cedac1-966f-470f-96d5-7434589cb198 container dapi-container: +STEP: delete the pod +Oct 19 17:04:46.988: INFO: Waiting for pod var-expansion-e9cedac1-966f-470f-96d5-7434589cb198 to disappear +Oct 19 17:04:46.994: INFO: Pod var-expansion-e9cedac1-966f-470f-96d5-7434589cb198 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:04:46.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5476" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":254,"skipped":4678,"failed":0} +SSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:04:47.003: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-8359 +STEP: Waiting for a default service account to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:04:47.159: INFO: created pod +Oct 19 17:04:47.159: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8359" to be "Succeeded or Failed" +Oct 19 17:04:47.165: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365649ms +Oct 19 17:04:49.170: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011740196s +STEP: Saw pod success +Oct 19 17:04:49.170: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Oct 19 17:05:19.172: INFO: polling logs +Oct 19 17:05:19.181: INFO: Pod logs: +2021/10/19 17:04:47 OK: Got token +2021/10/19 17:04:47 validating with in-cluster discovery +2021/10/19 17:04:47 OK: got issuer https://api.tmhay-ddd.it.internal.staging.k8s.ondemand.com +2021/10/19 17:04:47 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmhay-ddd.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-8359:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634663687, NotBefore:1634663087, IssuedAt:1634663087, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8359", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"0812ad02-1af8-4219-a4d8-4c4bd1b9610a"}}} +2021/10/19 17:04:47 OK: Constructed OIDC provider for issuer https://api.tmhay-ddd.it.internal.staging.k8s.ondemand.com +2021/10/19 17:04:47 OK: Validated signature on JWT +2021/10/19 17:04:47 OK: Got valid claims from token! +2021/10/19 17:04:47 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.tmhay-ddd.it.internal.staging.k8s.ondemand.com", Subject:"system:serviceaccount:svcaccounts-8359:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634663687, NotBefore:1634663087, IssuedAt:1634663087, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8359", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"0812ad02-1af8-4219-a4d8-4c4bd1b9610a"}}} + +Oct 19 17:05:19.181: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:19.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8359" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":255,"skipped":4684,"failed":0} +SSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:19.195: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5474 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
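
(Editor's aside: the framework next dumps the entire pod object it created, which is hard to read. Reduced to the fields this test actually exercises, the spec is roughly as follows; the pod name, image, nameserver `1.1.1.1`, and search domain `resolv.conf.local` all appear in the dump below:)

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers
spec:
  dnsPolicy: "None"                  # ignore the cluster resolver entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]         # written verbatim into the pod's /etc/resolv.conf
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
EOF
```
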
+Oct 19 17:05:19.364: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-5474 a141ec39-2cfc-42b9-a192-9e3d861916ad 34233 0 2021-10-19 17:05:19 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-19 17:05:19 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-656kk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-656kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostnam
e:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 17:05:19.367: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:05:21.372: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Oct 19 17:05:21.372: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5474 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:05:21.372: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Verifying customized DNS server is configured on pod... +Oct 19 17:05:21.580: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5474 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 19 17:05:21.580: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:05:21.830: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:21.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5474" for this suite. 
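
(Editor's aside: the two `ExecWithOptions` entries above are the actual verification step. The same checks can be reproduced by hand with the agnhost subcommands the test invokes:)

```bash
# Print the search suffixes and nameservers the pod actually resolves with;
# with the dnsConfig above, expect resolv.conf.local and 1.1.1.1.
kubectl -n dns-5474 exec test-dns-nameservers -- /agnhost dns-suffix
kubectl -n dns-5474 exec test-dns-nameservers -- /agnhost dns-server-list
```
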
+•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":256,"skipped":4689,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:21.845: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1833 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-4073ded6-43b1-453d-a45f-bf1f1d48b611 +STEP: Creating a pod to test consume configMaps +Oct 19 17:05:22.000: INFO: Waiting up to 5m0s for pod "pod-configmaps-84721dc5-74a3-4c96-a5fa-344c8eb9c2bb" in namespace "configmap-1833" to be "Succeeded or Failed" +Oct 19 17:05:22.004: INFO: Pod "pod-configmaps-84721dc5-74a3-4c96-a5fa-344c8eb9c2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.51002ms +Oct 19 17:05:24.008: INFO: Pod "pod-configmaps-84721dc5-74a3-4c96-a5fa-344c8eb9c2bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008086424s +STEP: Saw pod success +Oct 19 17:05:24.008: INFO: Pod "pod-configmaps-84721dc5-74a3-4c96-a5fa-344c8eb9c2bb" satisfied condition "Succeeded or Failed" +Oct 19 17:05:24.012: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-84721dc5-74a3-4c96-a5fa-344c8eb9c2bb container agnhost-container: +STEP: delete the pod +Oct 19 17:05:24.026: INFO: Waiting for pod pod-configmaps-84721dc5-74a3-4c96-a5fa-344c8eb9c2bb to disappear +Oct 19 17:05:24.029: INFO: Pod pod-configmaps-84721dc5-74a3-4c96-a5fa-344c8eb9c2bb no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:24.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1833" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":257,"skipped":4698,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:24.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4173 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 17:05:24.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15c85fab-e2cf-4005-a028-b38ee0c110b4" in namespace "downward-api-4173" to be "Succeeded or Failed" +Oct 19 17:05:24.183: INFO: Pod "downwardapi-volume-15c85fab-e2cf-4005-a028-b38ee0c110b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.785268ms +Oct 19 17:05:26.188: INFO: Pod "downwardapi-volume-15c85fab-e2cf-4005-a028-b38ee0c110b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007493074s +STEP: Saw pod success +Oct 19 17:05:26.188: INFO: Pod "downwardapi-volume-15c85fab-e2cf-4005-a028-b38ee0c110b4" satisfied condition "Succeeded or Failed" +Oct 19 17:05:26.191: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-15c85fab-e2cf-4005-a028-b38ee0c110b4 container client-container: +STEP: delete the pod +Oct 19 17:05:26.206: INFO: Waiting for pod downwardapi-volume-15c85fab-e2cf-4005-a028-b38ee0c110b4 to disappear +Oct 19 17:05:26.209: INFO: Pod downwardapi-volume-15c85fab-e2cf-4005-a028-b38ee0c110b4 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:26.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4173" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":258,"skipped":4722,"failed":0} +SS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:26.217: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2545 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service nodeport-test with type=NodePort in namespace services-2545 +STEP: creating replication controller nodeport-test in namespace services-2545 +I1019 17:05:26.362977 4339 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2545, replica count: 2 +I1019 17:05:29.414482 4339 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 17:05:29.414: INFO: Creating new exec pod +Oct 19 17:05:32.438: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2545 exec execpodrtpd7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 19 17:05:32.724: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 19 17:05:32.724: INFO: stdout: "" +Oct 19 17:05:33.724: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2545 exec execpodrtpd7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Oct 19 17:05:33.939: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 19 17:05:33.939: INFO: stdout: "nodeport-test-268wp" +Oct 19 17:05:33.940: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2545 exec execpodrtpd7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.186.103 80' +Oct 19 17:05:34.149: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.68.186.103 80\nConnection to 100.68.186.103 80 port [tcp/http] succeeded!\n" +Oct 19 17:05:34.149: INFO: stdout: "nodeport-test-nvm88" +Oct 19 17:05:34.149: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2545 exec execpodrtpd7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.1.123 31958' +Oct 19 17:05:34.359: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.1.123 31958\nConnection to 10.250.1.123 31958 port [tcp/*] succeeded!\n" +Oct 19 17:05:34.359: INFO: stdout: "" +Oct 19 17:05:35.359: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2545 exec execpodrtpd7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.1.123 31958' +Oct 19 17:05:35.588: INFO: stderr: "+ nc -v -t -w 2 10.250.1.123 31958\n+ echo hostName\nConnection to 10.250.1.123 31958 port [tcp/*] succeeded!\n" +Oct 19 17:05:35.588: INFO: stdout: "nodeport-test-268wp" +Oct 19 17:05:35.588: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-2545 exec execpodrtpd7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.3.120 31958' +Oct 19 17:05:35.869: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.3.120 31958\nConnection to 10.250.3.120 31958 port [tcp/*] succeeded!\n" +Oct 19 17:05:35.869: INFO: stdout: "nodeport-test-268wp" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:35.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2545" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":259,"skipped":4724,"failed":0} +SS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:35.878: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3878 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 17:05:36.399: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 17:05:39.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:39.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3878" for this suite. +STEP: Destroying namespace "webhook-3878-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":260,"skipped":4726,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:39.986: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-8632 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Oct 19 17:05:40.120: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 19 17:05:40.128: INFO: Waiting for terminating namespaces to be deleted... 
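
(Editor's aside: after the node inventory that follows, this predicate test picks a node, applies a random label to it, and relaunches a pod whose `nodeSelector` must match. The same flow by hand, with an illustrative label key and the node name taken from the log:)

```bash
NODE=shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9      # node picked by the test below
kubectl label node "$NODE" example.com/e2e-demo=42     # hypothetical label key

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"       # pod may only schedule onto the labelled node
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
EOF

kubectl label node "$NODE" example.com/e2e-demo-       # trailing '-' removes the label again
```
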
+Oct 19 17:05:40.139: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 before test +Oct 19 17:05:40.148: INFO: addons-nginx-ingress-controller-6ccd9d5d4d-87wtm from kube-system started at 2021-10-19 16:20:45 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: apiserver-proxy-ftftt from kube-system started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container proxy ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: Container sidecar ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: blackbox-exporter-65c549b94c-c5pzd from kube-system started at 2021-10-19 15:51:26 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container blackbox-exporter ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: calico-kube-controllers-86c64d79ff-hmgq6 from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container calico-kube-controllers ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: calico-node-gkqll from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container calico-node ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: calico-typha-deploy-58b94ff46-kljnn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container calico-typha ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: csi-driver-node-twl5g from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: kube-proxy-hgtmc from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: node-exporter-v9h4r from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: node-problem-detector-2s6bt from kube-system started at 2021-10-19 16:11:27 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: execpodrtpd7 from services-2545 started at 2021-10-19 17:05:29 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container agnhost-container ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: nodeport-test-nvm88 from services-2545 started at 2021-10-19 17:05:26 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.148: INFO: Container nodeport-test ready: true, restart count 0 +Oct 19 17:05:40.148: INFO: +Logging pods the apiserver thinks is on node shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq before test +Oct 19 17:05:40.159: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-56d9d84c8c-ftj5w from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container 
nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: apiserver-proxy-r6qsz from kube-system started at 2021-10-19 15:45:29 +0000 UTC (2 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container proxy ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: Container sidecar ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: calico-node-54s6z from kube-system started at 2021-10-19 15:46:29 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container calico-node ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: calico-node-vertical-autoscaler-785b5f968-w77tx from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: calico-typha-horizontal-autoscaler-5b58bb446c-bqq7q from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: calico-typha-vertical-autoscaler-5c9655cddd-w2d9c from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container autoscaler ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: coredns-9866fb499-7zgkw from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container coredns ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: coredns-9866fb499-kcm5k from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container coredns ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: csi-driver-node-ps5fs from kube-system started at 2021-10-19 15:45:29 +0000 UTC (3 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container csi-driver ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: Container csi-liveness-probe ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: Container csi-node-driver-registrar ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: kube-proxy-dpksr from kube-system started at 2021-10-19 15:47:27 +0000 UTC (2 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container conntrack-fix ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: Container kube-proxy ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: metrics-server-7958497998-bdvjq from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container metrics-server ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: node-exporter-2xtzn from kube-system started at 2021-10-19 15:45:29 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container node-exporter ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: node-problem-detector-6n9vb from kube-system started at 2021-10-19 16:11:28 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container node-problem-detector ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: vpn-shoot-6cdd4985bc-w7qgp from kube-system started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container vpn-shoot ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: dashboard-metrics-scraper-7ccbfc448f-htlbk from kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container 
dashboard-metrics-scraper ready: true, restart count 0 +Oct 19 17:05:40.159: INFO: kubernetes-dashboard-847f4ffdcd-6s4nf from kubernetes-dashboard started at 2021-10-19 15:45:49 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container kubernetes-dashboard ready: true, restart count 2 +Oct 19 17:05:40.159: INFO: nodeport-test-268wp from services-2545 started at 2021-10-19 17:05:26 +0000 UTC (1 container statuses recorded) +Oct 19 17:05:40.159: INFO: Container nodeport-test ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-bfb19d70-ce65-4e07-b41d-a3b8f015af16 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-bfb19d70-ce65-4e07-b41d-a3b8f015af16 off the node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-bfb19d70-ce65-4e07-b41d-a3b8f015af16 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:44.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8632" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":261,"skipped":4728,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:44.263: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-950 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap that has name configmap-test-emptyKey-5b1f108e-2013-41c5-86f2-5435a2f6858e +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:44.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-950" for this suite. 
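
(Editor's aside: the empty-key case is a negative test: API-server validation rejects a ConfigMap whose `data` map contains an empty key, which is exactly what "should fail to create" asserts. Reproduced by hand, with an illustrative name:)

```bash
# Expect a validation error from the API server, along the lines of
# "a valid config key must consist of alphanumeric characters, '-', '_' or '.'".
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo
data:
  "": "value"        # empty key -> rejected by apiserver validation
EOF
```
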
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":262,"skipped":4753,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:44.415: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2670 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Oct 19 17:05:44.567: INFO: The status of Pod labelsupdatec4b09307-b190-4c94-a757-2e6d2cbe3ebd is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:05:46.572: INFO: The status of Pod labelsupdatec4b09307-b190-4c94-a757-2e6d2cbe3ebd is Running (Ready = true) +Oct 19 17:05:47.096: INFO: Successfully updated pod "labelsupdatec4b09307-b190-4c94-a757-2e6d2cbe3ebd" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:51.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2670" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":263,"skipped":4802,"failed":0} +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:51.145: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9191 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Oct 19 17:05:51.289: INFO: Waiting up to 5m0s for pod "pod-42a75f01-7017-44f4-bab4-8754860c5dc5" in namespace "emptydir-9191" to be "Succeeded or Failed" +Oct 19 17:05:51.292: INFO: Pod "pod-42a75f01-7017-44f4-bab4-8754860c5dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.149218ms +Oct 19 17:05:53.296: INFO: Pod "pod-42a75f01-7017-44f4-bab4-8754860c5dc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007129696s +STEP: Saw pod success +Oct 19 17:05:53.296: INFO: Pod "pod-42a75f01-7017-44f4-bab4-8754860c5dc5" satisfied condition "Succeeded or Failed" +Oct 19 17:05:53.299: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-42a75f01-7017-44f4-bab4-8754860c5dc5 container test-container: +STEP: delete the pod +Oct 19 17:05:53.318: INFO: Waiting for pod pod-42a75f01-7017-44f4-bab4-8754860c5dc5 to disappear +Oct 19 17:05:53.321: INFO: Pod pod-42a75f01-7017-44f4-bab4-8754860c5dc5 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:53.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9191" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":264,"skipped":4806,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:53.330: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-1170 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 19 17:05:53.464: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:05:55.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-1170" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":265,"skipped":4822,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:05:55.766: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-6004 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Oct 19 17:05:55.899: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 19 17:06:55.931: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:06:55.935: INFO: Starting informer... +STEP: Starting pods... 
+Oct 19 17:06:56.155: INFO: Pod1 is running on shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9. Tainting Node +Oct 19 17:06:58.393: INFO: Pod2 is running on shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Oct 19 17:07:03.911: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Oct 19 17:07:23.949: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:23.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-6004" for this suite. +•{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":266,"skipped":4843,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:23.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7759 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 17:07:24.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-170dde95-9acf-427f-bd99-b13bb3edf7d0" in namespace "downward-api-7759" to be "Succeeded or Failed" +Oct 19 17:07:24.121: INFO: Pod "downwardapi-volume-170dde95-9acf-427f-bd99-b13bb3edf7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.087775ms +Oct 19 17:07:26.124: INFO: Pod "downwardapi-volume-170dde95-9acf-427f-bd99-b13bb3edf7d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00881062s +STEP: Saw pod success +Oct 19 17:07:26.124: INFO: Pod "downwardapi-volume-170dde95-9acf-427f-bd99-b13bb3edf7d0" satisfied condition "Succeeded or Failed" +Oct 19 17:07:26.127: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-170dde95-9acf-427f-bd99-b13bb3edf7d0 container client-container: +STEP: delete the pod +Oct 19 17:07:26.142: INFO: Waiting for pod downwardapi-volume-170dde95-9acf-427f-bd99-b13bb3edf7d0 to disappear +Oct 19 17:07:26.145: INFO: Pod downwardapi-volume-170dde95-9acf-427f-bd99-b13bb3edf7d0 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:26.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7759" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":267,"skipped":4894,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:26.153: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6778 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 17:07:26.889: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 17:07:29.910: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:07:29.914: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1669-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:33.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6778" for this suite. +STEP: Destroying namespace "webhook-6778-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":268,"skipped":4906,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:33.165: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6789 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 19 17:07:33.308: INFO: Waiting up to 5m0s for pod "pod-0cc40e3d-be39-41a1-8582-bca1048ff82c" in namespace "emptydir-6789" to be "Succeeded or Failed" +Oct 19 17:07:33.312: INFO: Pod "pod-0cc40e3d-be39-41a1-8582-bca1048ff82c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.502795ms +Oct 19 17:07:35.316: INFO: Pod "pod-0cc40e3d-be39-41a1-8582-bca1048ff82c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007477802s +STEP: Saw pod success +Oct 19 17:07:35.316: INFO: Pod "pod-0cc40e3d-be39-41a1-8582-bca1048ff82c" satisfied condition "Succeeded or Failed" +Oct 19 17:07:35.320: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-0cc40e3d-be39-41a1-8582-bca1048ff82c container test-container: +STEP: delete the pod +Oct 19 17:07:35.336: INFO: Waiting for pod pod-0cc40e3d-be39-41a1-8582-bca1048ff82c to disappear +Oct 19 17:07:35.339: INFO: Pod pod-0cc40e3d-be39-41a1-8582-bca1048ff82c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:35.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6789" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":269,"skipped":4907,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:35.350: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1208 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:07:35.487: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1208 create -f -' +Oct 19 17:07:35.671: INFO: stderr: "" +Oct 19 17:07:35.671: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Oct 19 17:07:35.671: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1208 create -f -' +Oct 19 17:07:35.806: INFO: stderr: "" +Oct 19 17:07:35.806: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 19 17:07:36.811: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 17:07:36.811: INFO: Found 0 / 1 +Oct 19 17:07:37.811: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 17:07:37.811: INFO: Found 1 / 1 +Oct 19 17:07:37.811: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 19 17:07:37.814: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 17:07:37.814: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Oct 19 17:07:37.814: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1208 describe pod agnhost-primary-jd2cz' +Oct 19 17:07:37.874: INFO: stderr: "" +Oct 19 17:07:37.874: INFO: stdout: "Name: agnhost-primary-jd2cz\nNamespace: kubectl-1208\nPriority: 0\nNode: shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9/10.250.1.123\nStart Time: Tue, 19 Oct 2021 17:07:35 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/podIP: 100.96.0.59/32\n cni.projectcalico.org/podIPs: 100.96.0.59/32\n kubernetes.io/psp: e2e-test-privileged-psp\nStatus: Running\nIP: 100.96.0.59\nIPs:\n IP: 100.96.0.59\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://e05ec63c55b2258e50d2d624bc721cb58476093f8a80750a829317e70373d07c\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 19 Oct 2021 17:07:36 +0000\n Ready: True\n Restart Count: 0\n Environment:\n KUBERNETES_SERVICE_HOST: api.tmhay-ddd.it.internal.staging.k8s.ondemand.com\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-79cld (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-79cld:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-1208/agnhost-primary-jd2cz to shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Oct 19 17:07:37.874: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1208 describe rc agnhost-primary' +Oct 19 17:07:37.932: INFO: stderr: "" +Oct 19 17:07:37.932: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1208\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-jd2cz\n" +Oct 19 17:07:37.932: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1208 
describe service agnhost-primary' +Oct 19 17:07:37.985: INFO: stderr: "" +Oct 19 17:07:37.985: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1208\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.68.83.45\nIPs: 100.68.83.45\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.0.59:6379\nSession Affinity: None\nEvents: \n" +Oct 19 17:07:37.991: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1208 describe node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9' +Oct 19 17:07:38.065: INFO: stderr: "" +Oct 19 17:07:38.065: INFO: stdout: "Name: shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=g_c2_m4\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=eu-nl-1\n failure-domain.beta.kubernetes.io/zone=eu-nl-1a\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=g_c2_m4\n node.kubernetes.io/role=node\n topology.cinder.csi.openstack.org/zone=eu-nl-1a\n topology.kubernetes.io/region=eu-nl-1\n topology.kubernetes.io/zone=eu-nl-1a\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/cri-name=containerd\n worker.gardener.cloud/pool=worker-1\n worker.gardener.cloud/system-components=true\nAnnotations: checksum/cloud-config-data: 7dcdb79015812d0f299e0cd2f6c071df7574a7954fb364249c228bb1bab45557\n csi.volume.kubernetes.io/nodeid: {\"cinder.csi.openstack.org\":\"6cfd0557-93c6-4986-a02a-5a999fe510f4\"}\n node.alpha.kubernetes.io/ttl: 0\n node.machine.sapcloud.io/last-applied-anno-labels-taints:\n {\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"node.kubernetes.io/role\":\"node\",\"worker.garden.sapcloud.io/group\":\"worker-1\",\"worker.gard...\n projectcalico.org/IPv4Address: 10.250.1.123/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.96.0.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 19 Oct 2021 15:45:21 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9\n AcquireTime: \n RenewTime: Tue, 19 Oct 2021 17:07:33 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n ReadonlyFilesystem False Tue, 19 Oct 2021 17:06:37 +0000 Tue, 19 Oct 2021 16:11:29 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n CorruptDockerOverlay2 False Tue, 19 Oct 2021 17:06:37 +0000 Tue, 19 Oct 2021 16:11:29 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n FrequentUnregisterNetDevice False Tue, 19 Oct 2021 17:06:37 +0000 Tue, 19 Oct 2021 16:11:29 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Tue, 19 Oct 2021 17:06:37 +0000 Tue, 19 Oct 2021 16:11:29 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Tue, 19 Oct 2021 17:06:37 +0000 Tue, 19 Oct 2021 16:11:29 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Tue, 19 Oct 2021 17:06:37 +0000 Tue, 19 Oct 2021 16:11:29 +0000 NoFrequentContainerdRestart containerd is functioning properly\n KernelDeadlock False Tue, 19 Oct 2021 
17:06:37 +0000 Tue, 19 Oct 2021 16:11:29 +0000 KernelHasNoDeadlock kernel has no deadlock\n NetworkUnavailable False Tue, 19 Oct 2021 15:46:33 +0000 Tue, 19 Oct 2021 15:46:33 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Tue, 19 Oct 2021 17:07:35 +0000 Tue, 19 Oct 2021 15:45:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 19 Oct 2021 17:07:35 +0000 Tue, 19 Oct 2021 15:45:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 19 Oct 2021 17:07:35 +0000 Tue, 19 Oct 2021 15:45:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 19 Oct 2021 17:07:35 +0000 Tue, 19 Oct 2021 15:45:51 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.250.1.123\n Hostname: shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9\nCapacity:\n cpu: 2\n ephemeral-storage: 65006904Ki\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 4019288Ki\n pods: 110\nAllocatable:\n cpu: 1920m\n ephemeral-storage: 63238716162\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 2868312Ki\n pods: 110\nSystem Info:\n Machine ID: 14ca72d7ef7545faae6a2a73c16b4a24\n System UUID: f1750142-575d-5968-e9cd-11d266eb65c8\n Boot ID: eab3fbf5-19f2-4757-a04a-46d123b526f6\n Kernel Version: 5.4.0-7-cloud-amd64\n OS Image: Garden Linux 318.9\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.4\n Kubelet Version: v1.22.2\n Kube-Proxy Version: v1.22.2\nPodCIDR: 100.96.0.0/24\nPodCIDRs: 100.96.0.0/24\nProviderID: openstack:///6cfd0557-93c6-4986-a02a-5a999fe510f4\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system apiserver-proxy-ftftt 40m (2%) 400m (20%) 40Mi (1%) 500Mi (17%) 82m\n kube-system blackbox-exporter-65c549b94c-c5pzd 11m (0%) 44m (2%) 23574998 (0%) 94299992 (3%) 76m\n kube-system calico-kube-controllers-86c64d79ff-hmgq6 10m (0%) 50m (2%) 50Mi (1%) 100Mi (3%) 84m\n kube-system calico-node-gkqll 250m (13%) 800m (41%) 100Mi (3%) 700Mi (24%) 81m\n kube-system calico-typha-deploy-58b94ff46-kljnn 200m (10%) 500m (26%) 100Mi (3%) 700Mi (24%) 84m\n kube-system csi-driver-node-twl5g 40m (2%) 110m (5%) 114Mi (4%) 180Mi (6%) 82m\n kube-system kube-proxy-hgtmc 34m (1%) 92m (4%) 47753748 (1%) 145014992 (4%) 80m\n kube-system node-exporter-v9h4r 50m (2%) 150m (7%) 50Mi (1%) 150Mi (5%) 82m\n kube-system node-problem-detector-2s6bt 11m (0%) 44m (2%) 23574998 (0%) 94299992 (3%) 56m\n kubectl-1208 agnhost-primary-jd2cz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 646m (33%) 2190m (114%)\n memory 570957248 (19%) 2776797056 (94%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n example.com/fakecpu 0 0\nEvents: \n" +Oct 19 17:07:38.066: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-1208 describe namespace kubectl-1208' +Oct 19 17:07:38.119: INFO: stderr: "" +Oct 19 17:07:38.119: INFO: stdout: "Name: kubectl-1208\nLabels: e2e-framework=kubectl\n e2e-run=53c206ff-763e-4b70-8a0f-781602aa468c\n 
kubernetes.io/metadata.name=kubectl-1208\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:38.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1208" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":270,"skipped":4934,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:38.127: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8873 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:07:38.265: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating the pod +STEP: submitting the pod to kubernetes +Oct 19 17:07:38.276: INFO: The status of Pod pod-exec-websocket-8f1a4558-ae4c-486c-8494-df6fed08e46a is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:07:40.280: INFO: The status of Pod pod-exec-websocket-8f1a4558-ae4c-486c-8494-df6fed08e46a is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:40.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8873" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":271,"skipped":4978,"failed":0} +S +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:40.410: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-5798 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-7128 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-9362 +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:53.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-5798" for this suite. +STEP: Destroying namespace "nsdeletetest-7128" for this suite. +Oct 19 17:07:53.864: INFO: Namespace nsdeletetest-7128 was already deleted +STEP: Destroying namespace "nsdeletetest-9362" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":272,"skipped":4979,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:53.868: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8230 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:54.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8230" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":273,"skipped":4986,"failed":0} +SS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:54.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2703 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 19 17:07:54.180: INFO: Waiting up to 5m0s for pod "downward-api-84d1ca4e-5f0f-4af9-b228-6a77eb08a757" in namespace "downward-api-2703" to be "Succeeded or Failed" +Oct 19 17:07:54.192: INFO: Pod "downward-api-84d1ca4e-5f0f-4af9-b228-6a77eb08a757": Phase="Pending", Reason="", readiness=false. Elapsed: 11.456498ms +Oct 19 17:07:56.197: INFO: Pod "downward-api-84d1ca4e-5f0f-4af9-b228-6a77eb08a757": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.016886307s +STEP: Saw pod success +Oct 19 17:07:56.197: INFO: Pod "downward-api-84d1ca4e-5f0f-4af9-b228-6a77eb08a757" satisfied condition "Succeeded or Failed" +Oct 19 17:07:56.202: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downward-api-84d1ca4e-5f0f-4af9-b228-6a77eb08a757 container dapi-container: +STEP: delete the pod +Oct 19 17:07:56.222: INFO: Waiting for pod downward-api-84d1ca4e-5f0f-4af9-b228-6a77eb08a757 to disappear +Oct 19 17:07:56.225: INFO: Pod downward-api-84d1ca4e-5f0f-4af9-b228-6a77eb08a757 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:56.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2703" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":274,"skipped":4988,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:56.244: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3240 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 19 17:07:56.388: INFO: Waiting up to 5m0s for pod "downward-api-465f586e-64a4-4d90-b649-ccd6e4a428ee" in namespace "downward-api-3240" to be "Succeeded or Failed" +Oct 19 17:07:56.392: INFO: Pod "downward-api-465f586e-64a4-4d90-b649-ccd6e4a428ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215863ms +Oct 19 17:07:58.408: INFO: Pod "downward-api-465f586e-64a4-4d90-b649-ccd6e4a428ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019533154s +STEP: Saw pod success +Oct 19 17:07:58.408: INFO: Pod "downward-api-465f586e-64a4-4d90-b649-ccd6e4a428ee" satisfied condition "Succeeded or Failed" +Oct 19 17:07:58.411: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downward-api-465f586e-64a4-4d90-b649-ccd6e4a428ee container dapi-container: +STEP: delete the pod +Oct 19 17:07:58.424: INFO: Waiting for pod downward-api-465f586e-64a4-4d90-b649-ccd6e4a428ee to disappear +Oct 19 17:07:58.427: INFO: Pod downward-api-465f586e-64a4-4d90-b649-ccd6e4a428ee no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:07:58.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3240" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":275,"skipped":4999,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:07:58.436: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-553 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 17:07:58.581: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47a2e72e-2bf7-4a48-beec-8dad1d2742f4" in namespace "downward-api-553" to be "Succeeded or Failed" +Oct 19 17:07:58.584: INFO: Pod "downwardapi-volume-47a2e72e-2bf7-4a48-beec-8dad1d2742f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710299ms +Oct 19 17:08:00.588: INFO: Pod "downwardapi-volume-47a2e72e-2bf7-4a48-beec-8dad1d2742f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007727254s +STEP: Saw pod success +Oct 19 17:08:00.588: INFO: Pod "downwardapi-volume-47a2e72e-2bf7-4a48-beec-8dad1d2742f4" satisfied condition "Succeeded or Failed" +Oct 19 17:08:00.591: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-47a2e72e-2bf7-4a48-beec-8dad1d2742f4 container client-container: +STEP: delete the pod +Oct 19 17:08:00.609: INFO: Waiting for pod downwardapi-volume-47a2e72e-2bf7-4a48-beec-8dad1d2742f4 to disappear +Oct 19 17:08:00.612: INFO: Pod downwardapi-volume-47a2e72e-2bf7-4a48-beec-8dad1d2742f4 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:08:00.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-553" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":276,"skipped":5006,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:08:00.622: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-267 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-86b86e39-5086-46af-8a23-185c6b192222 in namespace container-probe-267 +Oct 19 17:08:02.774: INFO: Started pod busybox-86b86e39-5086-46af-8a23-185c6b192222 in namespace container-probe-267 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 19 17:08:02.777: INFO: Initial restart count of pod busybox-86b86e39-5086-46af-8a23-185c6b192222 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:12:03.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-267" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":277,"skipped":5013,"failed":0} +SS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:12:03.476: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-8272 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 19 17:12:03.620: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 19 17:12:03.626: INFO: starting watch +STEP: patching +STEP: updating +Oct 19 17:12:03.642: INFO: waiting for watch events with expected annotations +Oct 19 17:12:03.642: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:12:03.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-8272" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":278,"skipped":5015,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:12:03.680: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9041 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 17:12:03.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b93df1e0-2e10-4ba4-95ba-1da31be4e8dd" in namespace "downward-api-9041" to be "Succeeded or Failed" +Oct 19 17:12:03.836: INFO: Pod "downwardapi-volume-b93df1e0-2e10-4ba4-95ba-1da31be4e8dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250699ms +Oct 19 17:12:05.840: INFO: Pod "downwardapi-volume-b93df1e0-2e10-4ba4-95ba-1da31be4e8dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010091297s +STEP: Saw pod success +Oct 19 17:12:05.840: INFO: Pod "downwardapi-volume-b93df1e0-2e10-4ba4-95ba-1da31be4e8dd" satisfied condition "Succeeded or Failed" +Oct 19 17:12:05.843: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-b93df1e0-2e10-4ba4-95ba-1da31be4e8dd container client-container: +STEP: delete the pod +Oct 19 17:12:05.859: INFO: Waiting for pod downwardapi-volume-b93df1e0-2e10-4ba4-95ba-1da31be4e8dd to disappear +Oct 19 17:12:05.862: INFO: Pod downwardapi-volume-b93df1e0-2e10-4ba4-95ba-1da31be4e8dd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:12:05.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9041" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":279,"skipped":5037,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:12:05.871: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8841 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:12:06.015: INFO: Waiting up to 5m0s for pod "busybox-user-65534-af328c10-507f-4840-9377-3504a9e27d04" in namespace "security-context-test-8841" to be "Succeeded or Failed" +Oct 19 17:12:06.018: INFO: Pod "busybox-user-65534-af328c10-507f-4840-9377-3504a9e27d04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.200963ms +Oct 19 17:12:08.023: INFO: Pod "busybox-user-65534-af328c10-507f-4840-9377-3504a9e27d04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007694917s +Oct 19 17:12:08.023: INFO: Pod "busybox-user-65534-af328c10-507f-4840-9377-3504a9e27d04" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:12:08.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-8841" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":280,"skipped":5055,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:12:08.032: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7590 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:12:08.169: INFO: Creating simple deployment test-new-deployment +Oct 19 17:12:08.179: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 17:12:10.214: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-7590 c2fa4509-0693-4416-b13d-23f9433d85ad 36939 3 2021-10-19 17:12:08 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2021-10-19 17:12:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00293a868 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-19 17:12:09 +0000 UTC,LastTransitionTime:2021-10-19 17:12:08 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-19 17:12:10 +0000 UTC,LastTransitionTime:2021-10-19 17:12:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 19 17:12:10.218: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-7590 69a065cf-85ae-4a01-96f9-9d0044ec18c7 36945 3 2021-10-19 17:12:08 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment c2fa4509-0693-4416-b13d-23f9433d85ad 0xc00293ac67 0xc00293ac68}] [] [{kube-controller-manager Update apps/v1 2021-10-19 17:12:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2fa4509-0693-4416-b13d-23f9433d85ad\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:12:09 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 
847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00293acf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 19 17:12:10.221: INFO: Pod "test-new-deployment-847dcfb7fb-7dcqw" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-7dcqw test-new-deployment-847dcfb7fb- deployment-7590 b35882a9-4e0c-4393-aa65-c3ed855b322a 36942 0 2021-10-19 17:12:10 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 69a065cf-85ae-4a01-96f9-9d0044ec18c7 0xc00293b0a7 0xc00293b0a8}] [] [{kube-controller-manager Update v1 2021-10-19 17:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69a065cf-85ae-4a01-96f9-9d0044ec18c7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9hr6z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9hr6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-zh8gq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:
[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:12:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 19 17:12:10.221: INFO: Pod "test-new-deployment-847dcfb7fb-b8kvv" is available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-b8kvv test-new-deployment-847dcfb7fb- deployment-7590 b50ea315-8a1c-4e84-a81b-868ff1c40e66 36933 0 2021-10-19 17:12:08 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/podIP:100.96.0.68/32 cni.projectcalico.org/podIPs:100.96.0.68/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 69a065cf-85ae-4a01-96f9-9d0044ec18c7 0xc00293b220 0xc00293b221}] [] [{calico Update v1 2021-10-19 17:12:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2021-10-19 17:12:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69a065cf-85ae-4a01-96f9-9d0044ec18c7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 17:12:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5hxpb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hxpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralConta
iners:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:12:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:12:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:12:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.68,StartTime:2021-10-19 17:12:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 17:12:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://3795e68d97659968db3103785f78b7f25dc231e9ece7fcf5652e70a0c6eb816f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:12:10.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7590" for this suite. 
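The scale subresource tested above is a distinct `/scale` endpoint on the Deployment, separate from the main object. A hand-run sketch (deployment name illustrative, default namespace assumed):

```bash
# Create a deployment, then read and write its scale subresource.
kubectl create deployment scale-demo --image=nginx

# GET the apps/v1 .../scale endpoint directly.
kubectl get --raw /apis/apps/v1/namespaces/default/deployments/scale-demo/scale

# kubectl scale updates spec.replicas through that same subresource.
kubectl scale deployment scale-demo --replicas=4
kubectl get deployment scale-demo -o jsonpath='{.spec.replicas}{"\n"}'   # 4
```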
+•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":281,"skipped":5080,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:12:10.231: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8491 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 19 17:12:10.386: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:12:13.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-8491" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":282,"skipped":5099,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:12:13.543: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6289 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc in namespace container-probe-6289 +Oct 19 17:12:15.693: INFO: Started pod liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc in namespace container-probe-6289 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 19 17:12:15.695: INFO: Initial restart count of pod liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc is 0 +Oct 
19 17:12:35.744: INFO: Restart count of pod container-probe-6289/liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc is now 1 (20.048850105s elapsed) +Oct 19 17:12:55.820: INFO: Restart count of pod container-probe-6289/liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc is now 2 (40.124416489s elapsed) +Oct 19 17:13:15.878: INFO: Restart count of pod container-probe-6289/liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc is now 3 (1m0.182767051s elapsed) +Oct 19 17:13:35.947: INFO: Restart count of pod container-probe-6289/liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc is now 4 (1m20.251970869s elapsed) +Oct 19 17:14:42.200: INFO: Restart count of pod container-probe-6289/liveness-a7d1a5a3-46e5-4680-a00f-3d8a9c1125bc is now 5 (2m26.504255946s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:14:42.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6289" for this suite. +•{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":283,"skipped":5129,"failed":0} +SSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:14:42.217: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2903 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:00.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2903" for this suite. 
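The ReplaceConcurrent semantics verified above come from `concurrencyPolicy: Replace`: when the next scheduled run fires while the previous job is still active, the old job is deleted and replaced. A minimal sketch (names and the deliberately long-running command are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: replace-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace        # a still-running job is replaced by the next one
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: work
            image: busybox
            command: ["sleep", "300"]   # outlives the 1-minute schedule on purpose
EOF
kubectl get jobs --watch   # each minute the old job disappears and a new one is created
```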
+•{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":284,"skipped":5132,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:00.392: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4854 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Oct 19 17:16:00.535: INFO: Waiting up to 5m0s for pod "pod-aa8b90e3-9887-4f43-93f4-57ba53676cc7" in namespace "emptydir-4854" to be "Succeeded or Failed" +Oct 19 17:16:00.538: INFO: Pod "pod-aa8b90e3-9887-4f43-93f4-57ba53676cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.905573ms +Oct 19 17:16:02.542: INFO: Pod "pod-aa8b90e3-9887-4f43-93f4-57ba53676cc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006658659s +STEP: Saw pod success +Oct 19 17:16:02.542: INFO: Pod "pod-aa8b90e3-9887-4f43-93f4-57ba53676cc7" satisfied condition "Succeeded or Failed" +Oct 19 17:16:02.545: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-aa8b90e3-9887-4f43-93f4-57ba53676cc7 container test-container: +STEP: delete the pod +Oct 19 17:16:02.601: INFO: Waiting for pod pod-aa8b90e3-9887-4f43-93f4-57ba53676cc7 to disappear +Oct 19 17:16:02.604: INFO: Pod pod-aa8b90e3-9887-4f43-93f4-57ba53676cc7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:02.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4854" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":285,"skipped":5155,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:02.613: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-2133 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:02.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-2133" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":286,"skipped":5184,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:02.780: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4115 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Service +STEP: watching for the Service to be added +Oct 19 17:16:02.926: INFO: Found Service test-service-gv2vj in namespace services-4115 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Oct 19 17:16:02.926: INFO: Service test-service-gv2vj created +STEP: Getting /status +Oct 19 17:16:02.929: INFO: Service test-service-gv2vj has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Oct 19 17:16:02.936: INFO: observed Service test-service-gv2vj in namespace services-4115 with annotations: map[] & LoadBalancer: {[]} +Oct 19 17:16:02.936: INFO: Found Service test-service-gv2vj in namespace services-4115 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Oct 19 17:16:02.936: INFO: Service test-service-gv2vj has service status patched +STEP: updating the 
ServiceStatus +Oct 19 17:16:02.943: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Oct 19 17:16:02.946: INFO: Observed Service test-service-gv2vj in namespace services-4115 with annotations: map[] & Conditions: {[]} +Oct 19 17:16:02.946: INFO: Observed event: &Service{ObjectMeta:{test-service-gv2vj services-4115 2f1cbc5d-db78-402e-921f-6858ff4ec111 38178 0 2021-10-19 17:16:02 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-19 17:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2021-10-19 17:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:100.71.8.132,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[100.71.8.132],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Oct 19 17:16:02.946: INFO: Found Service test-service-gv2vj in namespace services-4115 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 19 17:16:02.946: INFO: Service test-service-gv2vj has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Oct 19 17:16:02.953: INFO: observed Service test-service-gv2vj in namespace services-4115 with labels: map[test-service-static:true] +Oct 19 17:16:02.953: INFO: observed Service test-service-gv2vj in namespace services-4115 with labels: map[test-service-static:true] +Oct 19 17:16:02.953: INFO: observed Service test-service-gv2vj in namespace services-4115 with labels: map[test-service-static:true] +Oct 19 17:16:02.953: INFO: Found Service test-service-gv2vj in namespace services-4115 with labels: map[test-service:patched test-service-static:true] +Oct 19 17:16:02.953: INFO: Service test-service-gv2vj patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Oct 19 17:16:02.962: INFO: Observed event: ADDED +Oct 19 17:16:02.962: INFO: Observed event: MODIFIED +Oct 19 17:16:02.962: INFO: Observed event: MODIFIED +Oct 19 17:16:02.962: INFO: Observed event: MODIFIED +Oct 19 17:16:02.962: INFO: Found Service test-service-gv2vj in namespace services-4115 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Oct 19 17:16:02.962: INFO: Service test-service-gv2vj deleted +[AfterEach] [sig-network] Services + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:02.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4115" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":287,"skipped":5198,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:02.969: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename disruption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-1939 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:03.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-1939" for this suite. 
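The create/update/patch/delete lifecycle the DisruptionController test walks through maps onto ordinary PDB operations. A minimal sketch (names and the `app: demo` selector are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: demo
EOF
kubectl get pdb pdb-demo                              # "processed" once status is computed
kubectl patch pdb pdb-demo -p '{"spec":{"minAvailable":2}}'
kubectl delete pdb pdb-demo
```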
+•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":288,"skipped":5211,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:03.146: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-6259 +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:16:03.278: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:04.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-6259" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":289,"skipped":5216,"failed":0} +SS +------------------------------ +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:04.313: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-3932 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:06.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-3932" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":290,"skipped":5218,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:06.591: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6596 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-5622a299-b4bd-4b8a-91a8-e1fd487a962a +STEP: Creating configMap with name cm-test-opt-upd-0c3e323c-c609-4825-b6ef-f1461288ec5d +STEP: Creating the pod +Oct 19 17:16:06.776: INFO: The status of Pod pod-projected-configmaps-5c148cf9-542d-4b7d-aa1f-9477a53376e8 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:16:08.780: INFO: The status of Pod pod-projected-configmaps-5c148cf9-542d-4b7d-aa1f-9477a53376e8 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-5622a299-b4bd-4b8a-91a8-e1fd487a962a +STEP: Updating configmap cm-test-opt-upd-0c3e323c-c609-4825-b6ef-f1461288ec5d +STEP: Creating configMap with name cm-test-opt-create-c88d0487-cb8c-4a4e-a0a0-fe46d09f4c9b +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:10.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6596" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":291,"skipped":5269,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:10.933: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2661 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:11.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2661" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":292,"skipped":5277,"failed":0} +S +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:11.104: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename endpointslice +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in endpointslice-1871 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 19 17:16:11.274: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 19 17:16:11.280: INFO: starting watch +STEP: patching +STEP: updating +Oct 19 17:16:11.293: INFO: waiting for watch events with expected annotations +Oct 19 17:16:11.293: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:11.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-1871" for this suite. 
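The discovery and listing steps above correspond to plain API reads against the `discovery.k8s.io/v1` group. A read-only sketch (the `kubernetes.io/service-name` label is how slices are linked to their Service):

```bash
kubectl get --raw /apis/discovery.k8s.io/v1 | head -n 5    # group/version discovery
kubectl get endpointslices -A                              # cluster-wide listing

# Slices belonging to a given Service, here the default/kubernetes Service:
kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes
```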
+•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":293,"skipped":5278,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:11.322: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9772 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:32.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9772" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":294,"skipped":5291,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:32.692: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7241 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Oct 19 17:16:32.838: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7241 f9b09db1-2ac8-42f7-b30f-99391d6d5d54 38545 0 2021-10-19 17:16:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-19 17:16:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 17:16:32.838: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7241 f9b09db1-2ac8-42f7-b30f-99391d6d5d54 38546 0 2021-10-19 17:16:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-19 17:16:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Oct 19 17:16:32.851: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7241 f9b09db1-2ac8-42f7-b30f-99391d6d5d54 38547 0 2021-10-19 17:16:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-19 17:16:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 19 17:16:32.851: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7241 f9b09db1-2ac8-42f7-b30f-99391d6d5d54 38548 0 2021-10-19 17:16:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-19 17:16:32 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:32.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-7241" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":295,"skipped":5300,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:32.859: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3521 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-814f874d-5252-476b-ac15-77dd359d8f48 +STEP: Creating a pod to test consume configMaps +Oct 19 17:16:33.005: INFO: Waiting up to 5m0s for pod "pod-configmaps-d28ef9d5-7487-4397-a687-7499581a39f5" in namespace "configmap-3521" to be "Succeeded or Failed" +Oct 19 17:16:33.008: INFO: Pod "pod-configmaps-d28ef9d5-7487-4397-a687-7499581a39f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334125ms +Oct 19 17:16:35.013: INFO: Pod "pod-configmaps-d28ef9d5-7487-4397-a687-7499581a39f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008143248s +STEP: Saw pod success +Oct 19 17:16:35.013: INFO: Pod "pod-configmaps-d28ef9d5-7487-4397-a687-7499581a39f5" satisfied condition "Succeeded or Failed" +Oct 19 17:16:35.016: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-d28ef9d5-7487-4397-a687-7499581a39f5 container agnhost-container: +STEP: delete the pod +Oct 19 17:16:35.029: INFO: Waiting for pod pod-configmaps-d28ef9d5-7487-4397-a687-7499581a39f5 to disappear +Oct 19 17:16:35.031: INFO: Pod pod-configmaps-d28ef9d5-7487-4397-a687-7499581a39f5 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:35.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3521" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":296,"skipped":5307,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:35.040: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6171 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 19 17:16:35.183: INFO: Waiting up to 5m0s for pod "downward-api-be273bf6-9c6f-4755-9136-2df38482809c" in namespace "downward-api-6171" to be "Succeeded or Failed" +Oct 19 17:16:35.186: INFO: Pod "downward-api-be273bf6-9c6f-4755-9136-2df38482809c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892464ms +Oct 19 17:16:37.190: INFO: Pod "downward-api-be273bf6-9c6f-4755-9136-2df38482809c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006787528s +STEP: Saw pod success +Oct 19 17:16:37.190: INFO: Pod "downward-api-be273bf6-9c6f-4755-9136-2df38482809c" satisfied condition "Succeeded or Failed" +Oct 19 17:16:37.193: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downward-api-be273bf6-9c6f-4755-9136-2df38482809c container dapi-container: +STEP: delete the pod +Oct 19 17:16:37.204: INFO: Waiting for pod downward-api-be273bf6-9c6f-4755-9136-2df38482809c to disappear +Oct 19 17:16:37.207: INFO: Pod downward-api-be273bf6-9c6f-4755-9136-2df38482809c no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:37.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6171" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":297,"skipped":5340,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:37.216: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-557 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:16:37.351: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Oct 19 17:16:38.376: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:16:38.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-557" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":298,"skipped":5361,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:16:38.389: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1088 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-1088 +STEP: creating service affinity-nodeport-transition in namespace services-1088 +STEP: creating replication controller affinity-nodeport-transition in namespace services-1088 +I1019 17:16:38.543533 4339 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1088, replica count: 3 +I1019 17:16:41.594988 4339 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 17:16:41.608: INFO: Creating new exec pod +Oct 19 17:16:44.629: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Oct 19 17:16:45.058: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Oct 19 17:16:45.058: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 17:16:45.058: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.77.58 80' +Oct 19 17:16:45.321: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.65.77.58 80\nConnection to 100.65.77.58 80 port [tcp/http] succeeded!\n" +Oct 19 17:16:45.321: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 17:16:45.321: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
10.250.1.123 31845' +Oct 19 17:16:45.510: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.1.123 31845\nConnection to 10.250.1.123 31845 port [tcp/*] succeeded!\n" +Oct 19 17:16:45.510: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 17:16:45.510: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.3.120 31845' +Oct 19 17:16:45.666: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.3.120 31845\nConnection to 10.250.3.120 31845 port [tcp/*] succeeded!\n" +Oct 19 17:16:45.666: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Oct 19 17:16:45.674: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.1.123:31845/ ; done' +Oct 19 17:16:45.893: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n" +Oct 19 17:16:45.893: INFO: stdout: "\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6" +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 
+Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:16:45.893: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:15.894: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.1.123:31845/ ; done' +Oct 19 17:17:16.227: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n" +Oct 19 17:17:16.227: INFO: stdout: "\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-s6smm" +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.227: INFO: 
Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.227: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.237: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.1.123:31845/ ; done' +Oct 19 17:17:16.527: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n" +Oct 19 17:17:16.527: INFO: stdout: "\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-5jwb7\naffinity-nodeport-transition-2pbg6\naffinity-nodeport-transition-2pbg6" +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.527: INFO: Received response from host: 
affinity-nodeport-transition-s6smm +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-5jwb7 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:16.527: INFO: Received response from host: affinity-nodeport-transition-2pbg6 +Oct 19 17:17:46.528: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-1088 exec execpod-affinityw9px5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.250.1.123:31845/ ; done' +Oct 19 17:17:46.762: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.250.1.123:31845/\n" +Oct 19 17:17:46.762: INFO: stdout: "\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm\naffinity-nodeport-transition-s6smm" +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: 
affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Received response from host: affinity-nodeport-transition-s6smm +Oct 19 17:17:46.762: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1088, will wait for the garbage collector to delete the pods +Oct 19 17:17:46.826: INFO: Deleting ReplicationController affinity-nodeport-transition took: 3.948859ms +Oct 19 17:17:46.926: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.568992ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:17:48.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1088" for this suite. 
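+
+The affinity test above drives this behaviour through the e2e framework, but the knob it toggles is just Service.spec.sessionAffinity. A sketch, assuming an existing NodePort service named affinity-demo backed by several pods (hypothetical names):
+
+```bash
+# Pin every client to one backend:
+kubectl patch svc affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
+# repeated curls against <nodeIP>:<nodePort> now return a single pod name,
+# as in the request loops above
+
+# Switch affinity off again:
+kubectl patch svc affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'
+# the same curl loop returns a mix of pod names
+```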
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":299,"skipped":5366,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:17:48.646: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8540 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:17:48.787: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-869c6a21-db8f-4bca-89da-43042199867a" in namespace "security-context-test-8540" to be "Succeeded or Failed" +Oct 19 17:17:48.790: INFO: Pod "alpine-nnp-false-869c6a21-db8f-4bca-89da-43042199867a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.15801ms +Oct 19 17:17:50.819: INFO: Pod "alpine-nnp-false-869c6a21-db8f-4bca-89da-43042199867a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032001276s +Oct 19 17:17:52.823: INFO: Pod "alpine-nnp-false-869c6a21-db8f-4bca-89da-43042199867a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035857253s +Oct 19 17:17:52.823: INFO: Pod "alpine-nnp-false-869c6a21-db8f-4bca-89da-43042199867a" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:17:52.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-8540" for this suite. 
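+
+The security-context test above asserts that allowPrivilegeEscalation: false makes the kernel's no_new_privs flag stick for a non-root container. A minimal pod showing the same setting (name and image are illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: no-priv-esc-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: demo
+    image: alpine
+    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
+    securityContext:
+      runAsUser: 1000
+      allowPrivilegeEscalation: false
+EOF
+kubectl logs no-priv-esc-demo   # expect: NoNewPrivs: 1
+```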
+•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":300,"skipped":5377,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:17:52.879: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3006 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-f913fb11-a3f6-4ea2-a9ec-32a7be1a5599 +STEP: Creating a pod to test consume configMaps +Oct 19 17:17:53.026: INFO: Waiting up to 5m0s for pod "pod-configmaps-126d5969-3489-417d-8dca-653acb9835e7" in namespace "configmap-3006" to be "Succeeded or Failed" +Oct 19 17:17:53.030: INFO: Pod "pod-configmaps-126d5969-3489-417d-8dca-653acb9835e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514919ms +Oct 19 17:17:55.035: INFO: Pod "pod-configmaps-126d5969-3489-417d-8dca-653acb9835e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009569783s +STEP: Saw pod success +Oct 19 17:17:55.035: INFO: Pod "pod-configmaps-126d5969-3489-417d-8dca-653acb9835e7" satisfied condition "Succeeded or Failed" +Oct 19 17:17:55.039: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-126d5969-3489-417d-8dca-653acb9835e7 container configmap-volume-test: +STEP: delete the pod +Oct 19 17:17:55.094: INFO: Waiting for pod pod-configmaps-126d5969-3489-417d-8dca-653acb9835e7 to disappear +Oct 19 17:17:55.097: INFO: Pod pod-configmaps-126d5969-3489-417d-8dca-653acb9835e7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:17:55.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3006" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":301,"skipped":5397,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:17:55.107: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-3825 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Oct 19 17:17:55.257: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:17:57.262: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 19 17:17:57.281: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:17:59.285: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Oct 19 17:17:59.294: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 19 17:17:59.297: INFO: Pod pod-with-prestop-http-hook still exists +Oct 19 17:18:01.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 19 17:18:01.300: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:01.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-3825" for this suite. 
+•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":302,"skipped":5428,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:01.316: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6100 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-6100 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-6100 +I1019 17:18:01.470094 4339 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6100, replica count: 2 +I1019 17:18:04.521005 4339 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 17:18:04.521: INFO: Creating new exec pod +Oct 19 17:18:07.541: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6100 exec execpod42qpw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Oct 19 17:18:07.758: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 19 17:18:07.758: INFO: stdout: "externalname-service-l6ns9" +Oct 19 17:18:07.758: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6100 exec execpod42qpw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.55.178 80' +Oct 19 17:18:07.981: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.55.178 80\nConnection to 100.71.55.178 80 port [tcp/http] succeeded!\n" +Oct 19 17:18:07.981: INFO: stdout: "" +Oct 19 17:18:08.981: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6100 exec execpod42qpw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.55.178 80' +Oct 19 17:18:09.225: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.55.178 80\nConnection to 100.71.55.178 80 port [tcp/http] succeeded!\n" +Oct 19 
17:18:09.225: INFO: stdout: "externalname-service-9bt9g" +Oct 19 17:18:09.225: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6100 exec execpod42qpw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.1.123 31905' +Oct 19 17:18:09.396: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.1.123 31905\nConnection to 10.250.1.123 31905 port [tcp/*] succeeded!\n" +Oct 19 17:18:09.396: INFO: stdout: "externalname-service-9bt9g" +Oct 19 17:18:09.396: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=services-6100 exec execpod42qpw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.250.3.120 31905' +Oct 19 17:18:09.608: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.250.3.120 31905\nConnection to 10.250.3.120 31905 port [tcp/*] succeeded!\n" +Oct 19 17:18:09.608: INFO: stdout: "externalname-service-l6ns9" +Oct 19 17:18:09.608: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:09.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6100" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":303,"skipped":5481,"failed":0} +SSS +------------------------------ +[sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:09.630: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-9430 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Oct 19 17:18:11.791: INFO: &Pod{ObjectMeta:{send-events-12a37a2f-f2b0-4e6c-9bcb-34f20a4ba3f3 events-9430 b4eaab6a-b539-46ac-bfbb-25c4db76f55a 39320 0 2021-10-19 17:18:09 +0000 UTC map[name:foo time:768951455] map[cni.projectcalico.org/podIP:100.96.0.90/32 cni.projectcalico.org/podIPs:100.96.0.90/32 kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-10-19 17:18:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 17:18:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 17:18:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.90\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vdssz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vdssz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:
map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:18:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:18:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:18:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:18:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.90,StartTime:2021-10-19 17:18:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 17:18:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://e6b91a6b23cd9e3bc54dacfd43a38a1a3a6124a8939e7be53449708bd4b4d01d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +STEP: checking for scheduler event about the pod +Oct 19 17:18:13.796: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Oct 19 17:18:15.800: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:15.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-9430" for this suite. 
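+
+Outside the e2e framework, the same scheduler and kubelet events can be listed with a field selector on the pod (namespace and pod name are placeholders):
+
+```bash
+kubectl get events -n <namespace> \
+  --field-selector involvedObject.kind=Pod,involvedObject.name=<pod-name>
+# typical reasons for a freshly started pod:
+#   Scheduled                     from default-scheduler
+#   Pulled / Created / Started    from kubelet
+```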
+•{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":346,"completed":304,"skipped":5484,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:15.814: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-52 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 17:18:15.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b274ab88-78f9-4a09-b333-16ffd179ac09" in namespace "downward-api-52" to be "Succeeded or Failed" +Oct 19 17:18:15.962: INFO: Pod "downwardapi-volume-b274ab88-78f9-4a09-b333-16ffd179ac09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.760317ms +Oct 19 17:18:17.966: INFO: Pod "downwardapi-volume-b274ab88-78f9-4a09-b333-16ffd179ac09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008450207s +STEP: Saw pod success +Oct 19 17:18:17.966: INFO: Pod "downwardapi-volume-b274ab88-78f9-4a09-b333-16ffd179ac09" satisfied condition "Succeeded or Failed" +Oct 19 17:18:17.969: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-b274ab88-78f9-4a09-b333-16ffd179ac09 container client-container: +STEP: delete the pod +Oct 19 17:18:17.981: INFO: Waiting for pod downwardapi-volume-b274ab88-78f9-4a09-b333-16ffd179ac09 to disappear +Oct 19 17:18:17.984: INFO: Pod downwardapi-volume-b274ab88-78f9-4a09-b333-16ffd179ac09 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:17.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-52" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":305,"skipped":5491,"failed":0} + +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:17.993: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9902 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Oct 19 17:18:28.200: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W1019 17:18:28.199989 4339 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Oct 19 17:18:28.200: INFO: Deleting pod "simpletest-rc-to-be-deleted-6fh6c" in namespace "gc-9902" +Oct 19 17:18:28.217: INFO: Deleting pod "simpletest-rc-to-be-deleted-74q4m" in namespace "gc-9902" +Oct 19 17:18:28.235: INFO: Deleting pod "simpletest-rc-to-be-deleted-8jg5j" in namespace "gc-9902" +Oct 19 17:18:28.254: INFO: Deleting pod "simpletest-rc-to-be-deleted-fksfn" in namespace "gc-9902" +Oct 19 17:18:28.270: INFO: Deleting pod "simpletest-rc-to-be-deleted-kkcbs" in namespace "gc-9902" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:28.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-9902" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":306,"skipped":5491,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:28.298: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-7593 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Oct 19 17:18:28.703: INFO: Pod name wrapped-volume-race-fcc804c8-85ab-4556-a68f-b08db63f46dd: Found 1 pods out of 5 +Oct 19 17:18:33.713: INFO: Pod name wrapped-volume-race-fcc804c8-85ab-4556-a68f-b08db63f46dd: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-fcc804c8-85ab-4556-a68f-b08db63f46dd in namespace emptydir-wrapper-7593, will wait for the garbage collector to delete the pods +Oct 19 17:18:33.789: INFO: Deleting ReplicationController wrapped-volume-race-fcc804c8-85ab-4556-a68f-b08db63f46dd took: 4.416104ms +Oct 19 17:18:33.890: INFO: Terminating ReplicationController wrapped-volume-race-fcc804c8-85ab-4556-a68f-b08db63f46dd pods took: 100.973757ms +STEP: Creating RC which spawns configmap-volume pods +Oct 19 17:18:35.012: INFO: Pod name wrapped-volume-race-9da21a43-6d5c-47ed-8fef-d51f639bf4f5: Found 0 pods out of 5 +Oct 19 17:18:40.023: INFO: Pod name wrapped-volume-race-9da21a43-6d5c-47ed-8fef-d51f639bf4f5: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-9da21a43-6d5c-47ed-8fef-d51f639bf4f5 in namespace emptydir-wrapper-7593, will wait for the garbage collector to delete the pods +Oct 19 17:18:40.099: INFO: Deleting ReplicationController wrapped-volume-race-9da21a43-6d5c-47ed-8fef-d51f639bf4f5 took: 4.400867ms +Oct 19 17:18:40.200: INFO: Terminating ReplicationController wrapped-volume-race-9da21a43-6d5c-47ed-8fef-d51f639bf4f5 pods took: 101.11093ms +STEP: Creating RC which spawns configmap-volume pods +Oct 19 17:18:41.615: INFO: Pod name wrapped-volume-race-567b4189-de15-4f93-bc75-74834fe0ed07: Found 0 pods out of 5 +Oct 19 17:18:46.626: INFO: Pod name wrapped-volume-race-567b4189-de15-4f93-bc75-74834fe0ed07: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-567b4189-de15-4f93-bc75-74834fe0ed07 in namespace emptydir-wrapper-7593, will wait for the garbage collector to delete the pods +Oct 19 17:18:46.702: INFO: Deleting ReplicationController wrapped-volume-race-567b4189-de15-4f93-bc75-74834fe0ed07 took: 5.582416ms +Oct 19 17:18:46.803: INFO: Terminating 
ReplicationController wrapped-volume-race-567b4189-de15-4f93-bc75-74834fe0ed07 pods took: 100.965532ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:48.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-7593" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":307,"skipped":5502,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:48.087: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-491 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 19 17:18:50.250: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:50.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-491" for this suite. 
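+
+The container-runtime test above writes its message as a non-root user to a non-default terminationMessagePath and expects it back in the container status. A sketch of the same round trip (names are illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: termination-msg-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: demo
+    image: busybox
+    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
+    terminationMessagePath: /dev/termination-custom
+    securityContext:
+      runAsUser: 1000
+EOF
+kubectl get pod termination-msg-demo \
+  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
+# expect: DONE
+```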
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":308,"skipped":5530,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:50.267: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-621 +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:18:50.403: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:50.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-621" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":309,"skipped":5563,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:50.945: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7135 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Oct 19 17:18:51.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9d27fa5-45d6-4ccf-aefc-16cee8f06e28" in namespace "projected-7135" to be "Succeeded or Failed" +Oct 19 17:18:51.123: INFO: Pod "downwardapi-volume-d9d27fa5-45d6-4ccf-aefc-16cee8f06e28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071125ms +Oct 19 17:18:53.128: INFO: Pod "downwardapi-volume-d9d27fa5-45d6-4ccf-aefc-16cee8f06e28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007799069s +STEP: Saw pod success +Oct 19 17:18:53.128: INFO: Pod "downwardapi-volume-d9d27fa5-45d6-4ccf-aefc-16cee8f06e28" satisfied condition "Succeeded or Failed" +Oct 19 17:18:53.131: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downwardapi-volume-d9d27fa5-45d6-4ccf-aefc-16cee8f06e28 container client-container: +STEP: delete the pod +Oct 19 17:18:53.144: INFO: Waiting for pod downwardapi-volume-d9d27fa5-45d6-4ccf-aefc-16cee8f06e28 to disappear +Oct 19 17:18:53.147: INFO: Pod downwardapi-volume-d9d27fa5-45d6-4ccf-aefc-16cee8f06e28 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:18:53.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7135" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":310,"skipped":5570,"failed":0} +SS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:18:53.156: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5352 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-34b23671-7bf5-471d-b6b1-59aa09bfac68 in namespace container-probe-5352 +Oct 19 17:18:55.308: INFO: Started pod busybox-34b23671-7bf5-471d-b6b1-59aa09bfac68 in namespace container-probe-5352 +STEP: checking the pod's current state and verifying that restartCount is present +Oct 19 17:18:55.311: INFO: Initial restart count of pod busybox-34b23671-7bf5-471d-b6b1-59aa09bfac68 is 0 +Oct 19 17:19:45.470: INFO: Restart count of pod container-probe-5352/busybox-34b23671-7bf5-471d-b6b1-59aa09bfac68 is now 1 (50.158731369s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:19:45.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5352" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":311,"skipped":5572,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:19:45.486: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename ingress +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingress-9351 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 19 17:19:45.654: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Oct 19 17:19:45.661: INFO: starting watch +STEP: patching +STEP: updating +Oct 19 17:19:45.673: INFO: waiting for watch events with expected annotations +Oct 19 17:19:45.673: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:19:45.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-9351" for this suite. 
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":312,"skipped":5582,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:19:45.717: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5929 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Oct 19 17:19:45.890: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5929 create -f -' +Oct 19 17:19:46.030: INFO: stderr: "" +Oct 19 17:19:46.030: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Oct 19 17:19:47.034: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 17:19:47.034: INFO: Found 0 / 1 +Oct 19 17:19:48.035: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 17:19:48.035: INFO: Found 1 / 1 +Oct 19 17:19:48.035: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Oct 19 17:19:48.038: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 17:19:48.038: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 19 17:19:48.038: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5929 patch pod agnhost-primary-gxbvh -p {"metadata":{"annotations":{"x":"y"}}}' +Oct 19 17:19:48.090: INFO: stderr: "" +Oct 19 17:19:48.090: INFO: stdout: "pod/agnhost-primary-gxbvh patched\n" +STEP: checking annotations +Oct 19 17:19:48.094: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 19 17:19:48.094: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:19:48.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5929" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":313,"skipped":5595,"failed":0} + +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:19:48.106: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3384 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Oct 19 17:19:48.358: INFO: Waiting up to 5m0s for pod "downward-api-632bb7a8-0899-4701-b646-a937207f8b06" in namespace "downward-api-3384" to be "Succeeded or Failed" +Oct 19 17:19:48.361: INFO: Pod "downward-api-632bb7a8-0899-4701-b646-a937207f8b06": Phase="Pending", Reason="", readiness=false. Elapsed: 3.165029ms +Oct 19 17:19:50.367: INFO: Pod "downward-api-632bb7a8-0899-4701-b646-a937207f8b06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008693859s +STEP: Saw pod success +Oct 19 17:19:50.367: INFO: Pod "downward-api-632bb7a8-0899-4701-b646-a937207f8b06" satisfied condition "Succeeded or Failed" +Oct 19 17:19:50.370: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod downward-api-632bb7a8-0899-4701-b646-a937207f8b06 container dapi-container: +STEP: delete the pod +Oct 19 17:19:50.384: INFO: Waiting for pod downward-api-632bb7a8-0899-4701-b646-a937207f8b06 to disappear +Oct 19 17:19:50.387: INFO: Pod downward-api-632bb7a8-0899-4701-b646-a937207f8b06 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:19:50.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3384" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":314,"skipped":5595,"failed":0} +S +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:19:50.396: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1792 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:19:50.548: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:19:52.552: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:19:54.552: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:19:56.554: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:19:58.553: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:20:00.554: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:20:02.552: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:20:04.552: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:20:06.552: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:20:08.552: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:20:10.563: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = false) +Oct 19 17:20:12.552: INFO: The status of Pod test-webserver-53f06641-09e3-444d-9d21-017e5c30a532 is Running (Ready = true) +Oct 19 17:20:12.555: INFO: Container started at 2021-10-19 17:19:51 +0000 UTC, pod became ready at 2021-10-19 17:20:10 +0000 UTC +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:20:12.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1792" for this suite. 
+•{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":315,"skipped":5596,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:20:12.565: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9441 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-8e198a7f-f737-450f-b437-70bd991a1219 +STEP: Creating a pod to test consume secrets +Oct 19 17:20:12.715: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c5627b2a-ab49-4cc8-b50f-a330ddf71e98" in namespace "projected-9441" to be "Succeeded or Failed" +Oct 19 17:20:12.719: INFO: Pod "pod-projected-secrets-c5627b2a-ab49-4cc8-b50f-a330ddf71e98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.530033ms +Oct 19 17:20:14.722: INFO: Pod "pod-projected-secrets-c5627b2a-ab49-4cc8-b50f-a330ddf71e98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007180228s +STEP: Saw pod success +Oct 19 17:20:14.722: INFO: Pod "pod-projected-secrets-c5627b2a-ab49-4cc8-b50f-a330ddf71e98" satisfied condition "Succeeded or Failed" +Oct 19 17:20:14.726: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-secrets-c5627b2a-ab49-4cc8-b50f-a330ddf71e98 container projected-secret-volume-test: +STEP: delete the pod +Oct 19 17:20:14.739: INFO: Waiting for pod pod-projected-secrets-c5627b2a-ab49-4cc8-b50f-a330ddf71e98 to disappear +Oct 19 17:20:14.742: INFO: Pod pod-projected-secrets-c5627b2a-ab49-4cc8-b50f-a330ddf71e98 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:20:14.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9441" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":316,"skipped":5630,"failed":0} +SSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:20:14.751: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-4722 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4722;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4722;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4722.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.112.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.112.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.112.64.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.64.112.118_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4722;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4722;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4722.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.112.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.112.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.112.64.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.64.112.118_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 17:20:16.962: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.051: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.058: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.062: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.066: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.071: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.111: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.115: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.120: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.124: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.128: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.133: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:17.168: INFO: Lookups using dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 19 17:20:22.174: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.179: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.183: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.227: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.233: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.247: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.285: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.289: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.293: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.298: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.302: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:22.342: INFO: Lookups using dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 19 17:20:27.173: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.178: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.228: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.233: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.320: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.324: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.328: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.333: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.337: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:27.382: INFO: Lookups using dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 19 17:20:32.174: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.179: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.183: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.187: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.192: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.196: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.235: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.239: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.244: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.248: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.253: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:32.291: INFO: Lookups using dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 19 17:20:37.175: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.179: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.226: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.233: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.240: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.248: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.296: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.300: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.305: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.310: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.315: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.320: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:37.361: INFO: Lookups using dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 19 17:20:42.174: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.180: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.228: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.233: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.315: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.319: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.324: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.328: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.332: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.337: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d: the server could not find the requested resource (get pods dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d) +Oct 19 17:20:42.371: INFO: Lookups using dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc 
wheezy_tcp@dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc] + +Oct 19 17:20:47.356: INFO: DNS probes using dns-4722/dns-test-e6b860ea-1dae-4209-941c-860795ef3e6d succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:20:47.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4722" for this suite. +•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":317,"skipped":5636,"failed":0} + +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:20:47.391: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3836 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-3073ef71-cfad-48e4-ace1-1c7c3e47d000 +STEP: Creating a pod to test consume secrets +Oct 19 17:20:47.538: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e8d6a15-49e1-40bf-81e9-c2ed8a7fd338" in namespace "projected-3836" to be "Succeeded or Failed" +Oct 19 17:20:47.542: INFO: Pod "pod-projected-secrets-3e8d6a15-49e1-40bf-81e9-c2ed8a7fd338": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303708ms +Oct 19 17:20:49.546: INFO: Pod "pod-projected-secrets-3e8d6a15-49e1-40bf-81e9-c2ed8a7fd338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007831945s +STEP: Saw pod success +Oct 19 17:20:49.546: INFO: Pod "pod-projected-secrets-3e8d6a15-49e1-40bf-81e9-c2ed8a7fd338" satisfied condition "Succeeded or Failed" +Oct 19 17:20:49.548: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-projected-secrets-3e8d6a15-49e1-40bf-81e9-c2ed8a7fd338 container projected-secret-volume-test: +STEP: delete the pod +Oct 19 17:20:49.602: INFO: Waiting for pod pod-projected-secrets-3e8d6a15-49e1-40bf-81e9-c2ed8a7fd338 to disappear +Oct 19 17:20:49.605: INFO: Pod pod-projected-secrets-3e8d6a15-49e1-40bf-81e9-c2ed8a7fd338 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:20:49.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3836" for this suite. 
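+
+The partial-name lookups in the DNS test above succeed because the kubelet writes search domains (<namespace>.svc.cluster.local, svc.cluster.local, cluster.local) into each pod's /etc/resolv.conf; a quick interactive check, assuming kubectl access and a service named dns-test-service in the current namespace (illustrative):
+
+kubectl run dns-probe --rm -ti --restart=Never --image=busybox:1.34.1 -- \
+  nslookup dns-test-service
+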
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":318,"skipped":5636,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:20:49.613: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6491 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Oct 19 17:20:50.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Oct 19 17:20:53.079: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:20:53.082: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:20:56.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6491" for this suite. +STEP: Destroying namespace "webhook-6491-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":319,"skipped":5686,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:20:56.405: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-1485 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Oct 19 17:20:59.064: INFO: Successfully updated pod "adopt-release--1-6wrfl" +STEP: Checking that the Job readopts the Pod +Oct 19 17:20:59.064: INFO: Waiting up to 15m0s for pod "adopt-release--1-6wrfl" in namespace "job-1485" to be "adopted" +Oct 19 17:20:59.068: INFO: Pod "adopt-release--1-6wrfl": Phase="Running", Reason="", readiness=true. Elapsed: 4.580055ms +Oct 19 17:21:01.073: INFO: Pod "adopt-release--1-6wrfl": Phase="Running", Reason="", readiness=true. Elapsed: 2.009459331s +Oct 19 17:21:01.073: INFO: Pod "adopt-release--1-6wrfl" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Oct 19 17:21:01.583: INFO: Successfully updated pod "adopt-release--1-6wrfl" +STEP: Checking that the Job releases the Pod +Oct 19 17:21:01.583: INFO: Waiting up to 15m0s for pod "adopt-release--1-6wrfl" in namespace "job-1485" to be "released" +Oct 19 17:21:01.585: INFO: Pod "adopt-release--1-6wrfl": Phase="Running", Reason="", readiness=true. Elapsed: 2.773333ms +Oct 19 17:21:03.590: INFO: Pod "adopt-release--1-6wrfl": Phase="Running", Reason="", readiness=true. Elapsed: 2.007320865s +Oct 19 17:21:03.590: INFO: Pod "adopt-release--1-6wrfl" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:03.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-1485" for this suite. 
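+
+Adoption and release in the Job test above are driven purely by labels and ownerReferences; a minimal manual check, assuming kubectl access (<job-pod> is a placeholder for a pod created by a Job, and controller-uid/job-name are the labels a Job sets on its pods in this Kubernetes version):
+
+# while adopted, the pod carries an ownerReference pointing at the Job
+kubectl get pod <job-pod> -o jsonpath='{.metadata.ownerReferences[?(@.kind=="Job")].name}'
+# stripping the Job's match labels makes the controller release the pod again
+kubectl label pod <job-pod> job-name- controller-uid-
+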
+•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":320,"skipped":5724,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:03.600: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-9760 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:21:03.738: INFO: Got root ca configmap in namespace "svcaccounts-9760" +Oct 19 17:21:03.742: INFO: Deleted root ca configmap in namespace "svcaccounts-9760" +STEP: waiting for a new root ca configmap created +Oct 19 17:21:04.246: INFO: Recreated root ca configmap in namespace "svcaccounts-9760" +Oct 19 17:21:04.249: INFO: Updated root ca configmap in namespace "svcaccounts-9760" +STEP: waiting for the root ca configmap reconciled +Oct 19 17:21:04.754: INFO: Reconciled root ca configmap in namespace "svcaccounts-9760" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:04.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-9760" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":321,"skipped":5750,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:04.764: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-9586 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-1217 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. 
+STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3282 +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:11.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-9586" for this suite. +STEP: Destroying namespace "nsdeletetest-1217" for this suite. +Oct 19 17:21:11.196: INFO: Namespace nsdeletetest-1217 was already deleted +STEP: Destroying namespace "nsdeletetest-3282" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":322,"skipped":5759,"failed":0} +SSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:11.201: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4954 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Oct 19 17:21:11.361: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:15.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4954" for this suite. 
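+
+The Namespaces test above relies on cascading deletion; a minimal sketch, assuming kubectl access (names are illustrative):
+
+kubectl create namespace nsdelete-demo
+kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
+kubectl delete namespace nsdelete-demo --wait
+# fails once deletion finishes: the namespace and the service in it are gone
+kubectl -n nsdelete-demo get services
+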
+•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":323,"skipped":5765,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:15.967: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7210 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Oct 19 17:21:16.112: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:21:18.117: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Oct 19 17:21:19.135: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:20.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-7210" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":324,"skipped":5814,"failed":0} +SSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:20.160: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-6254 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:21:20.296: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Oct 19 17:21:20.304: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 19 17:21:25.474: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Oct 19 17:21:25.474: INFO: Creating deployment "test-rolling-update-deployment" +Oct 19 17:21:25.479: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Oct 19 17:21:25.572: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Oct 19 17:21:27.579: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Oct 19 17:21:27.582: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 17:21:27.591: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6254 52ad064e-fccf-4c16-b9ca-5a925d6a41d1 41259 1 2021-10-19 17:21:25 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-19 17:21:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:21:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00616ee08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-19 17:21:25 +0000 UTC,LastTransitionTime:2021-10-19 17:21:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-19 17:21:26 +0000 UTC,LastTransitionTime:2021-10-19 17:21:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 19 17:21:27.594: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-6254 29f16186-b9ce-4bef-9aac-48a75fe696d5 41252 1 2021-10-19 17:21:25 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 52ad064e-fccf-4c16-b9ca-5a925d6a41d1 0xc00616f2f7 0xc00616f2f8}] [] [{kube-controller-manager Update apps/v1 2021-10-19 17:21:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52ad064e-fccf-4c16-b9ca-5a925d6a41d1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:21:26 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00616f3a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 19 17:21:27.594: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Oct 19 17:21:27.594: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6254 3495bcab-9eaa-4c60-8648-3e69ddf0bfea 41258 2 2021-10-19 17:21:20 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 52ad064e-fccf-4c16-b9ca-5a925d6a41d1 0xc00616f1c7 0xc00616f1c8}] [] [{e2e.test Update apps/v1 2021-10-19 17:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:21:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52ad064e-fccf-4c16-b9ca-5a925d6a41d1\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:21:26 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00616f288 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 19 17:21:27.597: INFO: Pod "test-rolling-update-deployment-585b757574-t7w7t" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-t7w7t test-rolling-update-deployment-585b757574- deployment-6254 c91d5788-2f91-490a-af60-18af3d3da3bc 41251 0 2021-10-19 17:21:25 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[cni.projectcalico.org/podIP:100.96.0.125/32 cni.projectcalico.org/podIPs:100.96.0.125/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 29f16186-b9ce-4bef-9aac-48a75fe696d5 0xc00616f807 0xc00616f808}] [] [{kube-controller-manager Update v1 2021-10-19 17:21:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"29f16186-b9ce-4bef-9aac-48a75fe696d5\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2021-10-19 17:21:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2021-10-19 17:21:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q6mcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6mcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralConta
iners:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:21:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:21:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:21:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:21:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:100.96.0.125,StartTime:2021-10-19 17:21:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-19 17:21:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://fccf4cde7e374dce030e38de8aa0bf9ba78c2f47e18250b2692f2dedb6bd816a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:27.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6254" for this suite. +•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":325,"skipped":5817,"failed":0} +SSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:27.605: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-450 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-e0765b68-ddad-4ebe-8033-4c8789e89c5b +STEP: Creating a pod to test consume configMaps +Oct 19 17:21:27.751: INFO: Waiting up to 5m0s for pod "pod-configmaps-1dcecc23-5a95-43f2-8c5a-1032ad3e8313" in namespace "configmap-450" to be "Succeeded or Failed" +Oct 19 17:21:27.754: INFO: Pod "pod-configmaps-1dcecc23-5a95-43f2-8c5a-1032ad3e8313": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.553732ms +Oct 19 17:21:29.759: INFO: Pod "pod-configmaps-1dcecc23-5a95-43f2-8c5a-1032ad3e8313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00870257s +STEP: Saw pod success +Oct 19 17:21:29.759: INFO: Pod "pod-configmaps-1dcecc23-5a95-43f2-8c5a-1032ad3e8313" satisfied condition "Succeeded or Failed" +Oct 19 17:21:29.762: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-configmaps-1dcecc23-5a95-43f2-8c5a-1032ad3e8313 container agnhost-container: +STEP: delete the pod +Oct 19 17:21:29.822: INFO: Waiting for pod pod-configmaps-1dcecc23-5a95-43f2-8c5a-1032ad3e8313 to disappear +Oct 19 17:21:29.825: INFO: Pod pod-configmaps-1dcecc23-5a95-43f2-8c5a-1032ad3e8313 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:29.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-450" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":326,"skipped":5822,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:29.833: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-3896 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Oct 19 17:21:31.995: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:32.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-3896" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":327,"skipped":5849,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:32.022: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4977 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Oct 19 17:21:32.178: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +Oct 19 17:21:35.040: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:46.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4977" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":328,"skipped":5854,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:46.294: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3708 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:21:57.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3708" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":329,"skipped":5868,"failed":0} + +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:21:57.506: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-7875 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Oct 19 17:21:57.651: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:21:59.655: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Oct 19 17:21:59.669: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 19 17:22:01.673: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Oct 19 17:22:01.686: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 19 17:22:01.690: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 19 17:22:03.691: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 19 17:22:03.694: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 19 17:22:05.691: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 19 17:22:05.695: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:05.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-7875" for this suite. +•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":330,"skipped":5868,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:05.703: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4809 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:22:05.836: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Oct 19 17:22:08.701: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4809 --namespace=crd-publish-openapi-4809 create -f -' +Oct 19 17:22:09.062: INFO: stderr: "" +Oct 19 17:22:09.062: INFO: stdout: "e2e-test-crd-publish-openapi-8929-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 19 17:22:09.062: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4809 --namespace=crd-publish-openapi-4809 delete e2e-test-crd-publish-openapi-8929-crds test-cr' +Oct 19 17:22:09.136: INFO: stderr: "" +Oct 19 17:22:09.136: INFO: stdout: "e2e-test-crd-publish-openapi-8929-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Oct 19 17:22:09.136: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4809 --namespace=crd-publish-openapi-4809 apply -f -' +Oct 19 17:22:09.261: INFO: stderr: "" +Oct 19 17:22:09.261: INFO: stdout: "e2e-test-crd-publish-openapi-8929-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 19 17:22:09.261: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4809 --namespace=crd-publish-openapi-4809 delete e2e-test-crd-publish-openapi-8929-crds test-cr' +Oct 19 17:22:09.311: INFO: stderr: "" +Oct 19 17:22:09.311: INFO: stdout: "e2e-test-crd-publish-openapi-8929-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Oct 19 17:22:09.311: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-4809 explain e2e-test-crd-publish-openapi-8929-crds' +Oct 19 17:22:09.439: INFO: stderr: "" +Oct 19 17:22:09.439: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8929-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:12.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4809" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":331,"skipped":5891,"failed":0} + +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:12.812: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5499 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-5499 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:22:12.960: INFO: Found 0 stateful pods, waiting for 1 +Oct 19 17:22:22.965: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Oct 19 17:22:22.983: INFO: Found 1 stateful pods, waiting for 2 +Oct 19 17:22:32.987: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 19 17:22:32.987: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Oct 19 17:22:33.005: INFO: Deleting all statefulset in ns statefulset-5499 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:33.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5499" for this suite. 
+•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":332,"skipped":5891,"failed":0} + +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:33.023: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename certificates +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-9266 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Oct 19 17:22:33.656: INFO: starting watch +STEP: patching +STEP: updating +Oct 19 17:22:33.666: INFO: waiting for watch events with expected annotations +Oct 19 17:22:33.666: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:33.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-9266" for this suite. 
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":333,"skipped":5891,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:33.729: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-9445 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Oct 19 17:22:33.861: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:37.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-9445" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":334,"skipped":5907,"failed":0} +SSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:37.174: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-9686 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:41.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-9686" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":335,"skipped":5911,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:41.355: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4604 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Oct 19 17:22:41.501: INFO: Waiting up to 5m0s for pod "pod-8fb11fee-21b7-4e51-8452-81ec301e921c" in namespace "emptydir-4604" to be "Succeeded or Failed" +Oct 19 17:22:41.505: INFO: Pod "pod-8fb11fee-21b7-4e51-8452-81ec301e921c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.884127ms +Oct 19 17:22:43.509: INFO: Pod "pod-8fb11fee-21b7-4e51-8452-81ec301e921c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008271838s +STEP: Saw pod success +Oct 19 17:22:43.509: INFO: Pod "pod-8fb11fee-21b7-4e51-8452-81ec301e921c" satisfied condition "Succeeded or Failed" +Oct 19 17:22:43.512: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod pod-8fb11fee-21b7-4e51-8452-81ec301e921c container test-container: +STEP: delete the pod +Oct 19 17:22:43.525: INFO: Waiting for pod pod-8fb11fee-21b7-4e51-8452-81ec301e921c to disappear +Oct 19 17:22:43.528: INFO: Pod pod-8fb11fee-21b7-4e51-8452-81ec301e921c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:43.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4604" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":336,"skipped":5922,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:43.537: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-8195 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:43.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-8195" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":337,"skipped":5926,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:43.688: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-3049 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:43.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3049" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":338,"skipped":5963,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:43.879: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2425 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:22:44.014: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-2425 version' +Oct 19 17:22:44.069: INFO: stderr: "" +Oct 19 17:22:44.069: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:38:50Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:32:41Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:44.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2425" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":339,"skipped":5988,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:44.076: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-2005 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:22:44.211: INFO: Creating deployment "test-recreate-deployment" +Oct 19 17:22:44.215: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Oct 19 17:22:44.220: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Oct 19 17:22:46.227: INFO: Waiting deployment "test-recreate-deployment" to complete +Oct 19 17:22:46.230: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Oct 19 17:22:46.237: INFO: Updating deployment test-recreate-deployment +Oct 19 17:22:46.237: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Oct 19 17:22:46.274: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-2005 c7c4075e-1a13-495b-91fb-d51ba267f4b8 42056 2 2021-10-19 17:22:44 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-19 17:22:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:22:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006444408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-19 17:22:46 +0000 UTC,LastTransitionTime:2021-10-19 17:22:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-19 17:22:46 +0000 UTC,LastTransitionTime:2021-10-19 17:22:44 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Oct 19 17:22:46.278: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-2005 44da6ed0-5b4f-4bd1-a401-af1c7952b70e 42055 1 2021-10-19 17:22:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c7c4075e-1a13-495b-91fb-d51ba267f4b8 0xc006444990 0xc006444991}] [] [{kube-controller-manager Update apps/v1 2021-10-19 17:22:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7c4075e-1a13-495b-91fb-d51ba267f4b8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:22:46 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006444a78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 19 17:22:46.278: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Oct 19 17:22:46.278: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-2005 0a8f57a0-cfbf-4ff9-babb-e319285e7ace 42048 2 2021-10-19 17:22:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c7c4075e-1a13-495b-91fb-d51ba267f4b8 0xc006444837 0xc006444838}] [] [{kube-controller-manager Update apps/v1 2021-10-19 17:22:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7c4075e-1a13-495b-91fb-d51ba267f4b8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-10-19 17:22:46 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006444908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 19 17:22:46.281: INFO: Pod "test-recreate-deployment-85d47dcb4-lccgl" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-lccgl test-recreate-deployment-85d47dcb4- deployment-2005 4e8d3599-d7ab-4bc8-b72d-f1943a23a632 42057 0 2021-10-19 17:22:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 44da6ed0-5b4f-4bd1-a401-af1c7952b70e 0xc006444f20 0xc006444f21}] [] [{kube-controller-manager Update v1 2021-10-19 17:22:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"44da6ed0-5b4f-4bd1-a401-af1c7952b70e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-19 17:22:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rd6lq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api.tmhay-ddd.it.internal.staging.k8s.ondemand.com,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rd6lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operat
or:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:22:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:22:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:22:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-19 17:22:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.1.123,PodIP:,StartTime:2021-10-19 17:22:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:22:46.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2005" for this suite. 
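+Editor's note (illustrative sketch, not test output): the RecreateDeployment test above creates a Deployment with `strategy.type: Recreate`, triggers a new rollout, and verifies that the old ReplicaSet is scaled to zero before any new pod runs. A minimal equivalent setup, using the two images seen in the log (names and namespace are hypothetical):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: recreate-demo
+spec:
+  replicas: 1
+  strategy:
+    type: Recreate   # old pods are deleted before new ones are created
+  selector:
+    matchLabels:
+      app: recreate-demo
+  template:
+    metadata:
+      labels:
+        app: recreate-demo
+    spec:
+      containers:
+      - name: main
+        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+EOF
+# Trigger a new rollout; the old pod terminates before the new one starts.
+kubectl set image deployment/recreate-demo main=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
+kubectl get pods -l app=recreate-demo --watch
+```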
+•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":340,"skipped":5996,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:22:46.289: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-1301 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1301.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1301.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 17:22:48.512: INFO: DNS probes using dns-test-953344d7-30d9-4edc-bab8-5eb6c94cdd3b succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1301.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1301.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 17:22:50.599: INFO: File wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:22:50.605: INFO: File jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:22:50.605: INFO: Lookups using dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 failed for: [wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local] + +Oct 19 17:22:55.655: INFO: File wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:22:55.660: INFO: File jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. 
+' instead of 'bar.example.com.' +Oct 19 17:22:55.660: INFO: Lookups using dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 failed for: [wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local] + +Oct 19 17:23:00.613: INFO: File wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:23:00.618: INFO: File jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:23:00.618: INFO: Lookups using dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 failed for: [wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local] + +Oct 19 17:23:05.612: INFO: File wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:23:05.655: INFO: File jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:23:05.655: INFO: Lookups using dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 failed for: [wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local] + +Oct 19 17:23:10.611: INFO: File wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:23:10.655: INFO: File jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:23:10.655: INFO: Lookups using dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 failed for: [wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local] + +Oct 19 17:23:15.611: INFO: File wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 19 17:23:15.616: INFO: File jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 19 17:23:15.616: INFO: Lookups using dns-1301/dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 failed for: [wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local] + +Oct 19 17:23:20.618: INFO: DNS probes using dns-test-3728493d-122a-46c0-83e0-c89ab8e36356 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1301.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1301.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1301.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Oct 19 17:23:22.726: INFO: DNS probes using dns-test-e15d938b-de03-4bbe-abbc-9bdf2da49359 succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:23:22.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-1301" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":341,"skipped":6009,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:23:22.757: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-9326 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's args +Oct 19 17:23:22.898: INFO: Waiting up to 5m0s for pod "var-expansion-64cb4e42-b3a4-4c71-9064-9f03264647bc" in namespace "var-expansion-9326" to be "Succeeded or Failed" +Oct 19 17:23:22.901: INFO: Pod "var-expansion-64cb4e42-b3a4-4c71-9064-9f03264647bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.89429ms +Oct 19 17:23:24.906: INFO: Pod "var-expansion-64cb4e42-b3a4-4c71-9064-9f03264647bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007301904s +STEP: Saw pod success +Oct 19 17:23:24.906: INFO: Pod "var-expansion-64cb4e42-b3a4-4c71-9064-9f03264647bc" satisfied condition "Succeeded or Failed" +Oct 19 17:23:24.909: INFO: Trying to get logs from node shoot--it--tmhay-ddd-worker-1-z1-67558-hxdg9 pod var-expansion-64cb4e42-b3a4-4c71-9064-9f03264647bc container dapi-container: +STEP: delete the pod +Oct 19 17:23:24.925: INFO: Waiting for pod var-expansion-64cb4e42-b3a4-4c71-9064-9f03264647bc to disappear +Oct 19 17:23:24.930: INFO: Pod var-expansion-64cb4e42-b3a4-4c71-9064-9f03264647bc no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:23:24.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9326" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":342,"skipped":6022,"failed":0} +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:23:24.941: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5485 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Oct 19 17:23:25.075: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5485 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' +Oct 19 17:23:25.147: INFO: stderr: "" +Oct 19 17:23:25.147: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 +Oct 19 17:23:25.150: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmhay-ddd.it.shoot.staging.k8s-hana.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5485 delete pods e2e-test-httpd-pod' +Oct 19 17:23:27.383: INFO: stderr: "" +Oct 19 17:23:27.383: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:23:27.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5485" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":343,"skipped":6031,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:23:27.392: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename cronjob +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-199 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:25:01.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-199" for this suite. 
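+Editor's note (illustrative sketch, not log output): the CronJob test above checks that with `concurrencyPolicy: Allow` (the default), a schedule that fires faster than its jobs finish produces more than one active job at a time. Roughly, assuming an image that provides `sleep` (the image below is a stand-in):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: concurrent-demo
+spec:
+  schedule: "*/1 * * * *"    # fire every minute
+  concurrencyPolicy: Allow   # the default: overlapping runs are permitted
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: Never
+          containers:
+          - name: sleeper
+            image: busybox
+            command: ["sleep", "300"]   # each job outlives the interval
+EOF
+# After a couple of minutes, more than one job should be listed as active:
+kubectl get cronjob concurrent-demo -o jsonpath='{.status.active}'
+```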
+•{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":344,"skipped":6038,"failed":0} + +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:25:01.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svc-latency +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-6399 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Oct 19 17:25:01.702: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: creating replication controller svc-latency-rc in namespace svc-latency-6399 +I1019 17:25:01.710450 4339 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6399, replica count: 1 +I1019 17:25:02.761429 4339 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 19 17:25:02.870: INFO: Created: latency-svc-b4t4v +Oct 19 17:25:02.874: INFO: Got endpoints: latency-svc-b4t4v [11.640031ms] +Oct 19 17:25:02.880: INFO: Created: latency-svc-xn2mb +Oct 19 17:25:02.884: INFO: Created: latency-svc-dtwh4 +Oct 19 17:25:02.884: INFO: Got endpoints: latency-svc-xn2mb [10.448636ms] +Oct 19 17:25:02.886: INFO: Got endpoints: latency-svc-dtwh4 [11.871314ms] +Oct 19 17:25:02.887: INFO: Created: latency-svc-rrb7w +Oct 19 17:25:02.890: INFO: Got endpoints: latency-svc-rrb7w [16.36573ms] +Oct 19 17:25:02.890: INFO: Created: latency-svc-mxqjv +Oct 19 17:25:02.893: INFO: Got endpoints: latency-svc-mxqjv [19.462812ms] +Oct 19 17:25:02.894: INFO: Created: latency-svc-87r25 +Oct 19 17:25:02.897: INFO: Got endpoints: latency-svc-87r25 [22.83169ms] +Oct 19 17:25:02.897: INFO: Created: latency-svc-qxl74 +Oct 19 17:25:02.901: INFO: Got endpoints: latency-svc-qxl74 [26.555485ms] +Oct 19 17:25:02.901: INFO: Created: latency-svc-mqqlk +Oct 19 17:25:02.902: INFO: Got endpoints: latency-svc-mqqlk [28.280871ms] +Oct 19 17:25:02.905: INFO: Created: latency-svc-vnpmx +Oct 19 17:25:02.909: INFO: Got endpoints: latency-svc-vnpmx [35.067489ms] +Oct 19 17:25:02.909: INFO: Created: latency-svc-lr2t6 +Oct 19 17:25:02.914: INFO: Got endpoints: latency-svc-lr2t6 [40.119422ms] +Oct 19 17:25:02.916: INFO: Created: latency-svc-sbqpb +Oct 19 17:25:02.917: INFO: Got endpoints: latency-svc-sbqpb [43.164306ms] +Oct 19 17:25:02.920: INFO: Created: latency-svc-6l8l5 +Oct 19 17:25:02.921: INFO: Got endpoints: latency-svc-6l8l5 [46.834262ms] +Oct 19 17:25:02.924: INFO: Created: latency-svc-rncbm +Oct 19 17:25:02.926: INFO: Got endpoints: latency-svc-rncbm [51.877023ms] +Oct 19 17:25:02.928: INFO: Created: latency-svc-8r7cq +Oct 19 17:25:02.930: INFO: Got endpoints: latency-svc-8r7cq [55.731874ms] +Oct 19 17:25:02.934: INFO: Created: latency-svc-cr27l +Oct 19 17:25:02.937: INFO: Created: latency-svc-7nl2n +Oct 19 17:25:02.937: INFO: Got endpoints: latency-svc-cr27l 
[62.844706ms] +Oct 19 17:25:02.972: INFO: Got endpoints: latency-svc-7nl2n [98.425234ms] +Oct 19 17:25:02.976: INFO: Created: latency-svc-965pg +Oct 19 17:25:02.977: INFO: Got endpoints: latency-svc-965pg [93.062815ms] +Oct 19 17:25:02.979: INFO: Created: latency-svc-jg2ps +Oct 19 17:25:02.981: INFO: Got endpoints: latency-svc-jg2ps [94.699783ms] +Oct 19 17:25:02.983: INFO: Created: latency-svc-9l4vn +Oct 19 17:25:02.985: INFO: Got endpoints: latency-svc-9l4vn [95.077884ms] +Oct 19 17:25:02.986: INFO: Created: latency-svc-r6s4k +Oct 19 17:25:02.988: INFO: Got endpoints: latency-svc-r6s4k [94.545172ms] +Oct 19 17:25:02.990: INFO: Created: latency-svc-plwr9 +Oct 19 17:25:02.991: INFO: Got endpoints: latency-svc-plwr9 [94.391747ms] +Oct 19 17:25:02.994: INFO: Created: latency-svc-6z6cf +Oct 19 17:25:02.995: INFO: Got endpoints: latency-svc-6z6cf [94.730808ms] +Oct 19 17:25:02.997: INFO: Created: latency-svc-jzcb9 +Oct 19 17:25:02.998: INFO: Got endpoints: latency-svc-jzcb9 [96.113162ms] +Oct 19 17:25:03.002: INFO: Created: latency-svc-vqbtl +Oct 19 17:25:03.004: INFO: Got endpoints: latency-svc-vqbtl [94.788322ms] +Oct 19 17:25:03.006: INFO: Created: latency-svc-tvtvm +Oct 19 17:25:03.009: INFO: Got endpoints: latency-svc-tvtvm [94.981816ms] +Oct 19 17:25:03.011: INFO: Created: latency-svc-jv8lc +Oct 19 17:25:03.012: INFO: Got endpoints: latency-svc-jv8lc [94.47051ms] +Oct 19 17:25:03.014: INFO: Created: latency-svc-9k6js +Oct 19 17:25:03.017: INFO: Got endpoints: latency-svc-9k6js [95.911806ms] +Oct 19 17:25:03.018: INFO: Created: latency-svc-vwnbr +Oct 19 17:25:03.020: INFO: Got endpoints: latency-svc-vwnbr [93.969897ms] +Oct 19 17:25:03.022: INFO: Created: latency-svc-qswnk +Oct 19 17:25:03.025: INFO: Got endpoints: latency-svc-qswnk [95.321076ms] +Oct 19 17:25:03.026: INFO: Created: latency-svc-c2gls +Oct 19 17:25:03.027: INFO: Got endpoints: latency-svc-c2gls [90.004134ms] +Oct 19 17:25:03.079: INFO: Created: latency-svc-lxsd8 +Oct 19 17:25:03.083: INFO: Created: latency-svc-rxfpg +Oct 19 17:25:03.083: INFO: Got endpoints: latency-svc-lxsd8 [110.671397ms] +Oct 19 17:25:03.085: INFO: Got endpoints: latency-svc-rxfpg [107.854455ms] +Oct 19 17:25:03.087: INFO: Created: latency-svc-jcc9t +Oct 19 17:25:03.089: INFO: Got endpoints: latency-svc-jcc9t [108.094906ms] +Oct 19 17:25:03.091: INFO: Created: latency-svc-lztkq +Oct 19 17:25:03.092: INFO: Got endpoints: latency-svc-lztkq [106.740036ms] +Oct 19 17:25:03.095: INFO: Created: latency-svc-ppw48 +Oct 19 17:25:03.099: INFO: Created: latency-svc-ggf7k +Oct 19 17:25:03.102: INFO: Created: latency-svc-dgdtc +Oct 19 17:25:03.105: INFO: Created: latency-svc-p72qq +Oct 19 17:25:03.109: INFO: Created: latency-svc-bmqks +Oct 19 17:25:03.115: INFO: Created: latency-svc-gmpsd +Oct 19 17:25:03.119: INFO: Created: latency-svc-nk5jf +Oct 19 17:25:03.123: INFO: Created: latency-svc-97vd8 +Oct 19 17:25:03.123: INFO: Got endpoints: latency-svc-ppw48 [134.940567ms] +Oct 19 17:25:03.126: INFO: Created: latency-svc-zgpfw +Oct 19 17:25:03.129: INFO: Created: latency-svc-fr62p +Oct 19 17:25:03.132: INFO: Created: latency-svc-57c49 +Oct 19 17:25:03.135: INFO: Created: latency-svc-tgd4b +Oct 19 17:25:03.139: INFO: Created: latency-svc-fpw8x +Oct 19 17:25:03.142: INFO: Created: latency-svc-rfmlv +Oct 19 17:25:03.145: INFO: Created: latency-svc-hnf7d +Oct 19 17:25:03.148: INFO: Created: latency-svc-z65vm +Oct 19 17:25:03.172: INFO: Got endpoints: latency-svc-ggf7k [180.818885ms] +Oct 19 17:25:03.179: INFO: Created: latency-svc-sx6qk +Oct 19 17:25:03.223: INFO: Got 
endpoints: latency-svc-dgdtc [227.66322ms] +Oct 19 17:25:03.230: INFO: Created: latency-svc-bwjl6 +Oct 19 17:25:03.273: INFO: Got endpoints: latency-svc-p72qq [274.809832ms] +Oct 19 17:25:03.280: INFO: Created: latency-svc-bwb4v +Oct 19 17:25:03.323: INFO: Got endpoints: latency-svc-bmqks [318.91037ms] +Oct 19 17:25:03.329: INFO: Created: latency-svc-fcng8 +Oct 19 17:25:03.376: INFO: Got endpoints: latency-svc-gmpsd [366.369768ms] +Oct 19 17:25:03.382: INFO: Created: latency-svc-bdlhx +Oct 19 17:25:03.423: INFO: Got endpoints: latency-svc-nk5jf [410.907088ms] +Oct 19 17:25:03.429: INFO: Created: latency-svc-w4hvq +Oct 19 17:25:03.472: INFO: Got endpoints: latency-svc-97vd8 [455.119057ms] +Oct 19 17:25:03.482: INFO: Created: latency-svc-w4zzg +Oct 19 17:25:03.522: INFO: Got endpoints: latency-svc-zgpfw [502.247664ms] +Oct 19 17:25:03.529: INFO: Created: latency-svc-8j72f +Oct 19 17:25:03.573: INFO: Got endpoints: latency-svc-fr62p [545.853008ms] +Oct 19 17:25:03.582: INFO: Created: latency-svc-ftvml +Oct 19 17:25:03.623: INFO: Got endpoints: latency-svc-57c49 [597.708389ms] +Oct 19 17:25:03.630: INFO: Created: latency-svc-tp5wx +Oct 19 17:25:03.673: INFO: Got endpoints: latency-svc-tgd4b [589.638788ms] +Oct 19 17:25:03.679: INFO: Created: latency-svc-kxxdf +Oct 19 17:25:03.722: INFO: Got endpoints: latency-svc-fpw8x [637.253162ms] +Oct 19 17:25:03.729: INFO: Created: latency-svc-wf9hb +Oct 19 17:25:03.772: INFO: Got endpoints: latency-svc-rfmlv [683.375396ms] +Oct 19 17:25:03.779: INFO: Created: latency-svc-tn9nh +Oct 19 17:25:03.822: INFO: Got endpoints: latency-svc-hnf7d [730.151697ms] +Oct 19 17:25:03.830: INFO: Created: latency-svc-b6b4d +Oct 19 17:25:03.872: INFO: Got endpoints: latency-svc-z65vm [749.094956ms] +Oct 19 17:25:03.878: INFO: Created: latency-svc-7nplq +Oct 19 17:25:03.930: INFO: Got endpoints: latency-svc-sx6qk [758.386697ms] +Oct 19 17:25:03.937: INFO: Created: latency-svc-7mw9b +Oct 19 17:25:03.973: INFO: Got endpoints: latency-svc-bwjl6 [749.552037ms] +Oct 19 17:25:03.979: INFO: Created: latency-svc-8r7fz +Oct 19 17:25:04.023: INFO: Got endpoints: latency-svc-bwb4v [749.751234ms] +Oct 19 17:25:04.030: INFO: Created: latency-svc-dqw6v +Oct 19 17:25:04.073: INFO: Got endpoints: latency-svc-fcng8 [749.903029ms] +Oct 19 17:25:04.080: INFO: Created: latency-svc-hgphl +Oct 19 17:25:04.123: INFO: Got endpoints: latency-svc-bdlhx [747.504499ms] +Oct 19 17:25:04.129: INFO: Created: latency-svc-mq7m6 +Oct 19 17:25:04.172: INFO: Got endpoints: latency-svc-w4hvq [749.547801ms] +Oct 19 17:25:04.186: INFO: Created: latency-svc-dvrzz +Oct 19 17:25:04.222: INFO: Got endpoints: latency-svc-w4zzg [750.078136ms] +Oct 19 17:25:04.230: INFO: Created: latency-svc-4g6k4 +Oct 19 17:25:04.272: INFO: Got endpoints: latency-svc-8j72f [749.559754ms] +Oct 19 17:25:04.278: INFO: Created: latency-svc-pdhxs +Oct 19 17:25:04.323: INFO: Got endpoints: latency-svc-ftvml [750.186181ms] +Oct 19 17:25:04.330: INFO: Created: latency-svc-2bqgm +Oct 19 17:25:04.372: INFO: Got endpoints: latency-svc-tp5wx [749.44943ms] +Oct 19 17:25:04.379: INFO: Created: latency-svc-7lnx6 +Oct 19 17:25:04.423: INFO: Got endpoints: latency-svc-kxxdf [750.43456ms] +Oct 19 17:25:04.430: INFO: Created: latency-svc-7rf2r +Oct 19 17:25:04.473: INFO: Got endpoints: latency-svc-wf9hb [750.442532ms] +Oct 19 17:25:04.479: INFO: Created: latency-svc-m89gm +Oct 19 17:25:04.522: INFO: Got endpoints: latency-svc-tn9nh [749.8106ms] +Oct 19 17:25:04.528: INFO: Created: latency-svc-25fb5 +Oct 19 17:25:04.573: INFO: Got endpoints: 
latency-svc-b6b4d [750.672225ms] +Oct 19 17:25:04.580: INFO: Created: latency-svc-wbwhz +Oct 19 17:25:04.623: INFO: Got endpoints: latency-svc-7nplq [750.766888ms] +Oct 19 17:25:04.629: INFO: Created: latency-svc-9gggg +Oct 19 17:25:04.673: INFO: Got endpoints: latency-svc-7mw9b [742.968295ms] +Oct 19 17:25:04.680: INFO: Created: latency-svc-lkwkf +Oct 19 17:25:04.721: INFO: Got endpoints: latency-svc-8r7fz [748.897268ms] +Oct 19 17:25:04.728: INFO: Created: latency-svc-5jwjs +Oct 19 17:25:04.772: INFO: Got endpoints: latency-svc-dqw6v [748.679545ms] +Oct 19 17:25:04.784: INFO: Created: latency-svc-blw5d +Oct 19 17:25:04.874: INFO: Got endpoints: latency-svc-hgphl [801.358074ms] +Oct 19 17:25:04.874: INFO: Got endpoints: latency-svc-mq7m6 [751.145673ms] +Oct 19 17:25:04.881: INFO: Created: latency-svc-znjng +Oct 19 17:25:04.884: INFO: Created: latency-svc-zlmkt +Oct 19 17:25:04.974: INFO: Got endpoints: latency-svc-dvrzz [802.127815ms] +Oct 19 17:25:04.975: INFO: Got endpoints: latency-svc-4g6k4 [752.429722ms] +Oct 19 17:25:04.981: INFO: Created: latency-svc-47nf9 +Oct 19 17:25:04.985: INFO: Created: latency-svc-lkpsx +Oct 19 17:25:05.022: INFO: Got endpoints: latency-svc-pdhxs [750.412473ms] +Oct 19 17:25:05.029: INFO: Created: latency-svc-vcwwz +Oct 19 17:25:05.072: INFO: Got endpoints: latency-svc-2bqgm [748.777402ms] +Oct 19 17:25:05.079: INFO: Created: latency-svc-mcx7l +Oct 19 17:25:05.121: INFO: Got endpoints: latency-svc-7lnx6 [748.911737ms] +Oct 19 17:25:05.134: INFO: Created: latency-svc-2mjcc +Oct 19 17:25:05.173: INFO: Got endpoints: latency-svc-7rf2r [749.231346ms] +Oct 19 17:25:05.179: INFO: Created: latency-svc-hw7l4 +Oct 19 17:25:05.221: INFO: Got endpoints: latency-svc-m89gm [748.507675ms] +Oct 19 17:25:05.228: INFO: Created: latency-svc-r42gh +Oct 19 17:25:05.273: INFO: Got endpoints: latency-svc-25fb5 [751.071486ms] +Oct 19 17:25:05.279: INFO: Created: latency-svc-ct6d8 +Oct 19 17:25:05.323: INFO: Got endpoints: latency-svc-wbwhz [750.303704ms] +Oct 19 17:25:05.329: INFO: Created: latency-svc-fnz6x +Oct 19 17:25:05.373: INFO: Got endpoints: latency-svc-9gggg [750.171022ms] +Oct 19 17:25:05.379: INFO: Created: latency-svc-wm2kq +Oct 19 17:25:05.423: INFO: Got endpoints: latency-svc-lkwkf [749.050607ms] +Oct 19 17:25:05.430: INFO: Created: latency-svc-qzszt +Oct 19 17:25:05.474: INFO: Got endpoints: latency-svc-5jwjs [752.7571ms] +Oct 19 17:25:05.483: INFO: Created: latency-svc-wdswn +Oct 19 17:25:05.523: INFO: Got endpoints: latency-svc-blw5d [751.214718ms] +Oct 19 17:25:05.530: INFO: Created: latency-svc-xkkpn +Oct 19 17:25:05.572: INFO: Got endpoints: latency-svc-znjng [698.112163ms] +Oct 19 17:25:05.578: INFO: Created: latency-svc-5rm5d +Oct 19 17:25:05.623: INFO: Got endpoints: latency-svc-zlmkt [748.467237ms] +Oct 19 17:25:05.629: INFO: Created: latency-svc-fgmjv +Oct 19 17:25:05.672: INFO: Got endpoints: latency-svc-47nf9 [697.756593ms] +Oct 19 17:25:05.678: INFO: Created: latency-svc-ns596 +Oct 19 17:25:05.723: INFO: Got endpoints: latency-svc-lkpsx [748.725444ms] +Oct 19 17:25:05.729: INFO: Created: latency-svc-hf7zh +Oct 19 17:25:05.772: INFO: Got endpoints: latency-svc-vcwwz [749.296064ms] +Oct 19 17:25:05.779: INFO: Created: latency-svc-qnsjl +Oct 19 17:25:05.823: INFO: Got endpoints: latency-svc-mcx7l [751.216204ms] +Oct 19 17:25:05.829: INFO: Created: latency-svc-2b26q +Oct 19 17:25:05.873: INFO: Got endpoints: latency-svc-2mjcc [751.359292ms] +Oct 19 17:25:05.879: INFO: Created: latency-svc-f2sxb +Oct 19 17:25:05.922: INFO: Got endpoints: latency-svc-hw7l4 
[749.351151ms] +Oct 19 17:25:05.929: INFO: Created: latency-svc-hg8gv +Oct 19 17:25:05.979: INFO: Got endpoints: latency-svc-r42gh [757.669228ms] +Oct 19 17:25:05.985: INFO: Created: latency-svc-kqs28 +Oct 19 17:25:06.022: INFO: Got endpoints: latency-svc-ct6d8 [748.995168ms] +Oct 19 17:25:06.028: INFO: Created: latency-svc-wsrb7 +Oct 19 17:25:06.073: INFO: Got endpoints: latency-svc-fnz6x [749.363484ms] +Oct 19 17:25:06.079: INFO: Created: latency-svc-xmw89 +Oct 19 17:25:06.122: INFO: Got endpoints: latency-svc-wm2kq [749.022416ms] +Oct 19 17:25:06.128: INFO: Created: latency-svc-qwpwb +Oct 19 17:25:06.172: INFO: Got endpoints: latency-svc-qzszt [749.816327ms] +Oct 19 17:25:06.179: INFO: Created: latency-svc-6zc64 +Oct 19 17:25:06.222: INFO: Got endpoints: latency-svc-wdswn [747.891077ms] +Oct 19 17:25:06.229: INFO: Created: latency-svc-kbpzr +Oct 19 17:25:06.272: INFO: Got endpoints: latency-svc-xkkpn [749.361577ms] +Oct 19 17:25:06.279: INFO: Created: latency-svc-9vxlm +Oct 19 17:25:06.323: INFO: Got endpoints: latency-svc-5rm5d [750.571291ms] +Oct 19 17:25:06.329: INFO: Created: latency-svc-2l4px +Oct 19 17:25:06.372: INFO: Got endpoints: latency-svc-fgmjv [749.690841ms] +Oct 19 17:25:06.379: INFO: Created: latency-svc-hmhgg +Oct 19 17:25:06.423: INFO: Got endpoints: latency-svc-ns596 [751.15321ms] +Oct 19 17:25:06.430: INFO: Created: latency-svc-lc4fg +Oct 19 17:25:06.472: INFO: Got endpoints: latency-svc-hf7zh [749.076337ms] +Oct 19 17:25:06.479: INFO: Created: latency-svc-xfzdb +Oct 19 17:25:06.523: INFO: Got endpoints: latency-svc-qnsjl [751.326954ms] +Oct 19 17:25:06.531: INFO: Created: latency-svc-wntd4 +Oct 19 17:25:06.571: INFO: Got endpoints: latency-svc-2b26q [748.308316ms] +Oct 19 17:25:06.577: INFO: Created: latency-svc-qrxv6 +Oct 19 17:25:06.623: INFO: Got endpoints: latency-svc-f2sxb [750.267419ms] +Oct 19 17:25:06.629: INFO: Created: latency-svc-lrdsq +Oct 19 17:25:06.672: INFO: Got endpoints: latency-svc-hg8gv [749.627295ms] +Oct 19 17:25:06.678: INFO: Created: latency-svc-vvhsf +Oct 19 17:25:06.722: INFO: Got endpoints: latency-svc-kqs28 [742.925259ms] +Oct 19 17:25:06.730: INFO: Created: latency-svc-qc895 +Oct 19 17:25:06.772: INFO: Got endpoints: latency-svc-wsrb7 [749.823799ms] +Oct 19 17:25:06.778: INFO: Created: latency-svc-96hpf +Oct 19 17:25:06.822: INFO: Got endpoints: latency-svc-xmw89 [749.418823ms] +Oct 19 17:25:06.828: INFO: Created: latency-svc-bpslr +Oct 19 17:25:06.873: INFO: Got endpoints: latency-svc-qwpwb [750.489145ms] +Oct 19 17:25:06.879: INFO: Created: latency-svc-vl77s +Oct 19 17:25:06.922: INFO: Got endpoints: latency-svc-6zc64 [749.562849ms] +Oct 19 17:25:06.928: INFO: Created: latency-svc-fk4tq +Oct 19 17:25:06.976: INFO: Got endpoints: latency-svc-kbpzr [753.31356ms] +Oct 19 17:25:06.982: INFO: Created: latency-svc-rxn8s +Oct 19 17:25:07.022: INFO: Got endpoints: latency-svc-9vxlm [749.277355ms] +Oct 19 17:25:07.028: INFO: Created: latency-svc-qckfh +Oct 19 17:25:07.072: INFO: Got endpoints: latency-svc-2l4px [749.265318ms] +Oct 19 17:25:07.079: INFO: Created: latency-svc-6h5hs +Oct 19 17:25:07.122: INFO: Got endpoints: latency-svc-hmhgg [749.529758ms] +Oct 19 17:25:07.128: INFO: Created: latency-svc-pvldd +Oct 19 17:25:07.172: INFO: Got endpoints: latency-svc-lc4fg [748.79989ms] +Oct 19 17:25:07.179: INFO: Created: latency-svc-55mcf +Oct 19 17:25:07.222: INFO: Got endpoints: latency-svc-xfzdb [749.586258ms] +Oct 19 17:25:07.229: INFO: Created: latency-svc-xkpq9 +Oct 19 17:25:07.274: INFO: Got endpoints: latency-svc-wntd4 [750.80553ms] +Oct 
19 17:25:07.282: INFO: Created: latency-svc-wxcj6 +Oct 19 17:25:07.323: INFO: Got endpoints: latency-svc-qrxv6 [751.786556ms] +Oct 19 17:25:07.330: INFO: Created: latency-svc-wnc8s +Oct 19 17:25:07.373: INFO: Got endpoints: latency-svc-lrdsq [749.887811ms] +Oct 19 17:25:07.380: INFO: Created: latency-svc-9flhd +Oct 19 17:25:07.422: INFO: Got endpoints: latency-svc-vvhsf [750.364156ms] +Oct 19 17:25:07.429: INFO: Created: latency-svc-qwlbm +Oct 19 17:25:07.473: INFO: Got endpoints: latency-svc-qc895 [750.687716ms] +Oct 19 17:25:07.480: INFO: Created: latency-svc-dvtx7 +Oct 19 17:25:07.522: INFO: Got endpoints: latency-svc-96hpf [750.092769ms] +Oct 19 17:25:07.528: INFO: Created: latency-svc-jkmcg +Oct 19 17:25:07.573: INFO: Got endpoints: latency-svc-bpslr [750.487153ms] +Oct 19 17:25:07.581: INFO: Created: latency-svc-sxswz +Oct 19 17:25:07.623: INFO: Got endpoints: latency-svc-vl77s [750.068884ms] +Oct 19 17:25:07.629: INFO: Created: latency-svc-v2gkk +Oct 19 17:25:07.672: INFO: Got endpoints: latency-svc-fk4tq [750.028763ms] +Oct 19 17:25:07.678: INFO: Created: latency-svc-fd8w2 +Oct 19 17:25:07.734: INFO: Got endpoints: latency-svc-rxn8s [758.164854ms] +Oct 19 17:25:07.750: INFO: Created: latency-svc-wzd9d +Oct 19 17:25:07.775: INFO: Got endpoints: latency-svc-qckfh [753.382144ms] +Oct 19 17:25:07.782: INFO: Created: latency-svc-cv265 +Oct 19 17:25:07.830: INFO: Got endpoints: latency-svc-6h5hs [757.818164ms] +Oct 19 17:25:07.840: INFO: Created: latency-svc-lhcnc +Oct 19 17:25:07.873: INFO: Got endpoints: latency-svc-pvldd [751.286178ms] +Oct 19 17:25:07.879: INFO: Created: latency-svc-9xdww +Oct 19 17:25:07.924: INFO: Got endpoints: latency-svc-55mcf [751.97999ms] +Oct 19 17:25:07.931: INFO: Created: latency-svc-ll94c +Oct 19 17:25:07.995: INFO: Got endpoints: latency-svc-xkpq9 [773.314429ms] +Oct 19 17:25:08.012: INFO: Created: latency-svc-4gn62 +Oct 19 17:25:08.025: INFO: Got endpoints: latency-svc-wxcj6 [751.282499ms] +Oct 19 17:25:08.033: INFO: Created: latency-svc-s2bg9 +Oct 19 17:25:08.072: INFO: Got endpoints: latency-svc-wnc8s [749.099263ms] +Oct 19 17:25:08.079: INFO: Created: latency-svc-8pk8t +Oct 19 17:25:08.122: INFO: Got endpoints: latency-svc-9flhd [749.211757ms] +Oct 19 17:25:08.129: INFO: Created: latency-svc-cq262 +Oct 19 17:25:08.172: INFO: Got endpoints: latency-svc-qwlbm [749.875963ms] +Oct 19 17:25:08.179: INFO: Created: latency-svc-cgw5m +Oct 19 17:25:08.222: INFO: Got endpoints: latency-svc-dvtx7 [749.404633ms] +Oct 19 17:25:08.230: INFO: Created: latency-svc-xhgbs +Oct 19 17:25:08.272: INFO: Got endpoints: latency-svc-jkmcg [749.900815ms] +Oct 19 17:25:08.279: INFO: Created: latency-svc-wbblg +Oct 19 17:25:08.322: INFO: Got endpoints: latency-svc-sxswz [749.747344ms] +Oct 19 17:25:08.329: INFO: Created: latency-svc-k5xm6 +Oct 19 17:25:08.372: INFO: Got endpoints: latency-svc-v2gkk [749.187664ms] +Oct 19 17:25:08.378: INFO: Created: latency-svc-x2h7m +Oct 19 17:25:08.423: INFO: Got endpoints: latency-svc-fd8w2 [750.435059ms] +Oct 19 17:25:08.429: INFO: Created: latency-svc-mbbcc +Oct 19 17:25:08.472: INFO: Got endpoints: latency-svc-wzd9d [737.861379ms] +Oct 19 17:25:08.478: INFO: Created: latency-svc-ccnmk +Oct 19 17:25:08.523: INFO: Got endpoints: latency-svc-cv265 [747.352824ms] +Oct 19 17:25:08.529: INFO: Created: latency-svc-j6bbl +Oct 19 17:25:08.572: INFO: Got endpoints: latency-svc-lhcnc [741.933054ms] +Oct 19 17:25:08.579: INFO: Created: latency-svc-rbswv +Oct 19 17:25:08.622: INFO: Got endpoints: latency-svc-9xdww [748.57553ms] +Oct 19 17:25:08.635: 
INFO: Created: latency-svc-bwcwm +Oct 19 17:25:08.672: INFO: Got endpoints: latency-svc-ll94c [747.734155ms] +Oct 19 17:25:08.678: INFO: Created: latency-svc-tst2v +Oct 19 17:25:08.723: INFO: Got endpoints: latency-svc-4gn62 [727.167349ms] +Oct 19 17:25:08.729: INFO: Created: latency-svc-2tvjg +Oct 19 17:25:08.773: INFO: Got endpoints: latency-svc-s2bg9 [748.189631ms] +Oct 19 17:25:08.782: INFO: Created: latency-svc-lwrng +Oct 19 17:25:08.823: INFO: Got endpoints: latency-svc-8pk8t [750.897562ms] +Oct 19 17:25:08.831: INFO: Created: latency-svc-pv9x2 +Oct 19 17:25:08.873: INFO: Got endpoints: latency-svc-cq262 [750.422846ms] +Oct 19 17:25:08.879: INFO: Created: latency-svc-jlvt6 +Oct 19 17:25:08.922: INFO: Got endpoints: latency-svc-cgw5m [750.105349ms] +Oct 19 17:25:08.929: INFO: Created: latency-svc-qgctg +Oct 19 17:25:08.973: INFO: Got endpoints: latency-svc-xhgbs [750.416511ms] +Oct 19 17:25:08.979: INFO: Created: latency-svc-gvhxl +Oct 19 17:25:09.022: INFO: Got endpoints: latency-svc-wbblg [749.814586ms] +Oct 19 17:25:09.029: INFO: Created: latency-svc-54hgd +Oct 19 17:25:09.073: INFO: Got endpoints: latency-svc-k5xm6 [750.390965ms] +Oct 19 17:25:09.080: INFO: Created: latency-svc-tgpqm +Oct 19 17:25:09.599: INFO: Got endpoints: latency-svc-x2h7m [1.227544092s] +Oct 19 17:25:09.600: INFO: Got endpoints: latency-svc-mbbcc [1.177200594s] +Oct 19 17:25:09.601: INFO: Got endpoints: latency-svc-rbswv [1.02862036s] +Oct 19 17:25:09.601: INFO: Got endpoints: latency-svc-ccnmk [1.129069142s] +Oct 19 17:25:09.601: INFO: Got endpoints: latency-svc-j6bbl [1.078308595s] +Oct 19 17:25:09.602: INFO: Got endpoints: latency-svc-bwcwm [979.728304ms] +Oct 19 17:25:09.602: INFO: Got endpoints: latency-svc-2tvjg [879.158669ms] +Oct 19 17:25:09.602: INFO: Got endpoints: latency-svc-lwrng [828.389464ms] +Oct 19 17:25:09.602: INFO: Got endpoints: latency-svc-tst2v [929.757999ms] +Oct 19 17:25:09.602: INFO: Got endpoints: latency-svc-pv9x2 [778.72098ms] +Oct 19 17:25:09.608: INFO: Created: latency-svc-492bj +Oct 19 17:25:09.614: INFO: Created: latency-svc-jwqtc +Oct 19 17:25:09.617: INFO: Created: latency-svc-wwwq4 +Oct 19 17:25:09.620: INFO: Created: latency-svc-vd66r +Oct 19 17:25:09.622: INFO: Got endpoints: latency-svc-jlvt6 [749.3355ms] +Oct 19 17:25:09.624: INFO: Created: latency-svc-rzqc8 +Oct 19 17:25:09.628: INFO: Created: latency-svc-tplgn +Oct 19 17:25:09.673: INFO: Created: latency-svc-hm5nk +Oct 19 17:25:09.673: INFO: Got endpoints: latency-svc-qgctg [751.333516ms] +Oct 19 17:25:09.677: INFO: Created: latency-svc-s4kdj +Oct 19 17:25:09.679: INFO: Created: latency-svc-mk7pt +Oct 19 17:25:09.684: INFO: Created: latency-svc-brbqp +Oct 19 17:25:09.688: INFO: Created: latency-svc-vp5f8 +Oct 19 17:25:09.690: INFO: Created: latency-svc-6j962 +Oct 19 17:25:09.723: INFO: Got endpoints: latency-svc-gvhxl [750.53311ms] +Oct 19 17:25:09.732: INFO: Created: latency-svc-6tmwh +Oct 19 17:25:09.773: INFO: Got endpoints: latency-svc-54hgd [751.32821ms] +Oct 19 17:25:09.780: INFO: Created: latency-svc-w4s6z +Oct 19 17:25:09.822: INFO: Got endpoints: latency-svc-tgpqm [748.734867ms] +Oct 19 17:25:09.830: INFO: Created: latency-svc-s448c +Oct 19 17:25:09.874: INFO: Got endpoints: latency-svc-492bj [274.964381ms] +Oct 19 17:25:09.882: INFO: Created: latency-svc-s8fzl +Oct 19 17:25:09.922: INFO: Got endpoints: latency-svc-jwqtc [322.599997ms] +Oct 19 17:25:09.929: INFO: Created: latency-svc-b9wgj +Oct 19 17:25:09.973: INFO: Got endpoints: latency-svc-wwwq4 [371.931489ms] +Oct 19 17:25:09.980: INFO: Created: 
latency-svc-r65nt +Oct 19 17:25:10.022: INFO: Got endpoints: latency-svc-vd66r [421.417904ms] +Oct 19 17:25:10.029: INFO: Created: latency-svc-sz47l +Oct 19 17:25:10.072: INFO: Got endpoints: latency-svc-rzqc8 [470.88395ms] +Oct 19 17:25:10.079: INFO: Created: latency-svc-lr6ht +Oct 19 17:25:10.122: INFO: Got endpoints: latency-svc-tplgn [520.408388ms] +Oct 19 17:25:10.129: INFO: Created: latency-svc-wzdsq +Oct 19 17:25:10.172: INFO: Got endpoints: latency-svc-hm5nk [569.94196ms] +Oct 19 17:25:10.178: INFO: Created: latency-svc-npcj2 +Oct 19 17:25:10.223: INFO: Got endpoints: latency-svc-s4kdj [621.011786ms] +Oct 19 17:25:10.229: INFO: Created: latency-svc-wjfks +Oct 19 17:25:10.272: INFO: Got endpoints: latency-svc-mk7pt [670.380563ms] +Oct 19 17:25:10.281: INFO: Created: latency-svc-bbtn2 +Oct 19 17:25:10.323: INFO: Got endpoints: latency-svc-brbqp [721.001086ms] +Oct 19 17:25:10.375: INFO: Got endpoints: latency-svc-vp5f8 [753.441742ms] +Oct 19 17:25:10.376: INFO: Created: latency-svc-6m6tf +Oct 19 17:25:10.382: INFO: Created: latency-svc-wwpmb +Oct 19 17:25:10.473: INFO: Got endpoints: latency-svc-6tmwh [750.186165ms] +Oct 19 17:25:10.475: INFO: Got endpoints: latency-svc-6j962 [801.248718ms] +Oct 19 17:25:10.484: INFO: Created: latency-svc-lsc4s +Oct 19 17:25:10.488: INFO: Created: latency-svc-kfx8m +Oct 19 17:25:10.524: INFO: Got endpoints: latency-svc-w4s6z [751.13341ms] +Oct 19 17:25:10.542: INFO: Created: latency-svc-b5npc +Oct 19 17:25:10.572: INFO: Got endpoints: latency-svc-s448c [750.366573ms] +Oct 19 17:25:10.584: INFO: Created: latency-svc-5k2md +Oct 19 17:25:10.624: INFO: Got endpoints: latency-svc-s8fzl [749.431455ms] +Oct 19 17:25:10.636: INFO: Created: latency-svc-pnv8z +Oct 19 17:25:10.672: INFO: Got endpoints: latency-svc-b9wgj [749.445101ms] +Oct 19 17:25:10.678: INFO: Created: latency-svc-pm8b5 +Oct 19 17:25:10.723: INFO: Got endpoints: latency-svc-r65nt [750.324298ms] +Oct 19 17:25:10.773: INFO: Got endpoints: latency-svc-sz47l [750.709197ms] +Oct 19 17:25:10.821: INFO: Got endpoints: latency-svc-lr6ht [749.604003ms] +Oct 19 17:25:10.872: INFO: Got endpoints: latency-svc-wzdsq [749.928723ms] +Oct 19 17:25:10.922: INFO: Got endpoints: latency-svc-npcj2 [749.723347ms] +Oct 19 17:25:10.972: INFO: Got endpoints: latency-svc-wjfks [749.139654ms] +Oct 19 17:25:11.022: INFO: Got endpoints: latency-svc-bbtn2 [749.864011ms] +Oct 19 17:25:11.072: INFO: Got endpoints: latency-svc-6m6tf [749.313925ms] +Oct 19 17:25:11.124: INFO: Got endpoints: latency-svc-wwpmb [748.608621ms] +Oct 19 17:25:11.172: INFO: Got endpoints: latency-svc-lsc4s [698.735227ms] +Oct 19 17:25:11.223: INFO: Got endpoints: latency-svc-kfx8m [748.620714ms] +Oct 19 17:25:11.272: INFO: Got endpoints: latency-svc-b5npc [747.69497ms] +Oct 19 17:25:11.323: INFO: Got endpoints: latency-svc-5k2md [750.754097ms] +Oct 19 17:25:11.372: INFO: Got endpoints: latency-svc-pnv8z [747.992486ms] +Oct 19 17:25:11.422: INFO: Got endpoints: latency-svc-pm8b5 [750.254006ms] +Oct 19 17:25:11.422: INFO: Latencies: [10.448636ms 11.871314ms 16.36573ms 19.462812ms 22.83169ms 26.555485ms 28.280871ms 35.067489ms 40.119422ms 43.164306ms 46.834262ms 51.877023ms 55.731874ms 62.844706ms 90.004134ms 93.062815ms 93.969897ms 94.391747ms 94.47051ms 94.545172ms 94.699783ms 94.730808ms 94.788322ms 94.981816ms 95.077884ms 95.321076ms 95.911806ms 96.113162ms 98.425234ms 106.740036ms 107.854455ms 108.094906ms 110.671397ms 134.940567ms 180.818885ms 227.66322ms 274.809832ms 274.964381ms 318.91037ms 322.599997ms 366.369768ms 371.931489ms 410.907088ms 
421.417904ms 455.119057ms 470.88395ms 502.247664ms 520.408388ms 545.853008ms 569.94196ms 589.638788ms 597.708389ms 621.011786ms 637.253162ms 670.380563ms 683.375396ms 697.756593ms 698.112163ms 698.735227ms 721.001086ms 727.167349ms 730.151697ms 737.861379ms 741.933054ms 742.925259ms 742.968295ms 747.352824ms 747.504499ms 747.69497ms 747.734155ms 747.891077ms 747.992486ms 748.189631ms 748.308316ms 748.467237ms 748.507675ms 748.57553ms 748.608621ms 748.620714ms 748.679545ms 748.725444ms 748.734867ms 748.777402ms 748.79989ms 748.897268ms 748.911737ms 748.995168ms 749.022416ms 749.050607ms 749.076337ms 749.094956ms 749.099263ms 749.139654ms 749.187664ms 749.211757ms 749.231346ms 749.265318ms 749.277355ms 749.296064ms 749.313925ms 749.3355ms 749.351151ms 749.361577ms 749.363484ms 749.404633ms 749.418823ms 749.431455ms 749.445101ms 749.44943ms 749.529758ms 749.547801ms 749.552037ms 749.559754ms 749.562849ms 749.586258ms 749.604003ms 749.627295ms 749.690841ms 749.723347ms 749.747344ms 749.751234ms 749.8106ms 749.814586ms 749.816327ms 749.823799ms 749.864011ms 749.875963ms 749.887811ms 749.900815ms 749.903029ms 749.928723ms 750.028763ms 750.068884ms 750.078136ms 750.092769ms 750.105349ms 750.171022ms 750.186165ms 750.186181ms 750.254006ms 750.267419ms 750.303704ms 750.324298ms 750.364156ms 750.366573ms 750.390965ms 750.412473ms 750.416511ms 750.422846ms 750.43456ms 750.435059ms 750.442532ms 750.487153ms 750.489145ms 750.53311ms 750.571291ms 750.672225ms 750.687716ms 750.709197ms 750.754097ms 750.766888ms 750.80553ms 750.897562ms 751.071486ms 751.13341ms 751.145673ms 751.15321ms 751.214718ms 751.216204ms 751.282499ms 751.286178ms 751.326954ms 751.32821ms 751.333516ms 751.359292ms 751.786556ms 751.97999ms 752.429722ms 752.7571ms 753.31356ms 753.382144ms 753.441742ms 757.669228ms 757.818164ms 758.164854ms 758.386697ms 773.314429ms 778.72098ms 801.248718ms 801.358074ms 802.127815ms 828.389464ms 879.158669ms 929.757999ms 979.728304ms 1.02862036s 1.078308595s 1.129069142s 1.177200594s 1.227544092s] +Oct 19 17:25:11.422: INFO: 50 %ile: 749.3355ms +Oct 19 17:25:11.422: INFO: 90 %ile: 753.382144ms +Oct 19 17:25:11.422: INFO: 99 %ile: 1.177200594s +Oct 19 17:25:11.422: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:25:11.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-6399" for this suite. 
+•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":345,"skipped":6038,"failed":0} +SSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Oct 19 17:25:11.433: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-6491 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +Oct 19 17:25:12.109: INFO: created pod pod-service-account-defaultsa +Oct 19 17:25:12.109: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Oct 19 17:25:12.117: INFO: created pod pod-service-account-mountsa +Oct 19 17:25:12.117: INFO: pod pod-service-account-mountsa service account token volume mount: true +Oct 19 17:25:12.123: INFO: created pod pod-service-account-nomountsa +Oct 19 17:25:12.123: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Oct 19 17:25:12.130: INFO: created pod pod-service-account-defaultsa-mountspec +Oct 19 17:25:12.130: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Oct 19 17:25:12.136: INFO: created pod pod-service-account-mountsa-mountspec +Oct 19 17:25:12.136: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Oct 19 17:25:12.142: INFO: created pod pod-service-account-nomountsa-mountspec +Oct 19 17:25:12.142: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Oct 19 17:25:12.148: INFO: created pod pod-service-account-defaultsa-nomountspec +Oct 19 17:25:12.148: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Oct 19 17:25:12.171: INFO: created pod pod-service-account-mountsa-nomountspec +Oct 19 17:25:12.171: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Oct 19 17:25:12.178: INFO: created pod pod-service-account-nomountsa-nomountspec +Oct 19 17:25:12.178: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Oct 19 17:25:12.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6491" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":346,"skipped":6046,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 19 17:25:12.185: INFO: Running AfterSuite actions on all nodes +Oct 19 17:25:12.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 +Oct 19 17:25:12.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Oct 19 17:25:12.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 +Oct 19 17:25:12.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Oct 19 17:25:12.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Oct 19 17:25:12.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Oct 19 17:25:12.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Oct 19 17:25:12.185: INFO: Running AfterSuite actions on node 1 +Oct 19 17:25:12.185: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/e2e/artifacts/1634658811/junit_01.xml +{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6086,"failed":0} + +Ran 346 of 6432 Specs in 5498.692 seconds +SUCCESS! -- 346 Passed | 0 Failed | 0 Flaked | 0 Pending | 6086 Skipped +PASS + +Ginkgo ran 1 suite in 1h31m40.483530472s +Test Suite Passed diff --git a/v1.22/gardener-openstack/junit_01.xml b/v1.22/gardener-openstack/junit_01.xml new file mode 100644 index 0000000000..75d84c00f3 --- /dev/null +++ b/v1.22/gardener-openstack/junit_01.xml @@ -0,0 +1,18607 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 